Hi! Thanks for visiting.
    Below is an AI chatbot for Geoff White's new book Rinsed, which reveals how tech is changing the shadowy industry of money laundering.

    Ask a question, and the chatbot will tell you what's in the book.
    For example, you might want to ask:
  • How are money launderers using cryptocurrencies like Bitcoin?
  • Who is Geoff White?
  • Why should I buy this book?
    Feel free to contact us with any feedback!
    Who's behind this?
    Geoff White is an author and investigative journalist covering organised crime and technology.
    Jacob Weiss is an AI software developer.
    Isn't this just ChatGPT?
    No. ChatGPT and other generative AI chatbots typically draw their answers from huge amounts of text gathered from across the Internet. This chatbot only provides answers based on the content of the book.
    The answers are not quotes from the book; they're fresh sentences generated by the chatbot software.
    How does it work?
    In VERY basic terms: the software takes the entire book manuscript, and turns it into a numerical representation. Those numbers indicate the likelihood of words appearing next to or near each other. For example, in Geoff's book, the word "money" is very likely to appear next to or near the word "laundering".
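    If you're curious what that might look like, here's a toy sketch in Python - a drastically simplified stand-in for the real thing - that builds a "numerical map" simply by counting how often pairs of words appear near each other. The sample text is made up for the illustration; it's not a quote from the book.

```python
# A toy "numerical map": count how often each pair of words appears
# within a few words of each other in a piece of text.
import re
from collections import Counter

def cooccurrence_map(text, window=5):
    words = re.findall(r"[a-z]+", text.lower())
    pairs = Counter()
    for i, word in enumerate(words):
        for neighbour in words[i + 1 : i + 1 + window]:
            pairs[frozenset((word, neighbour))] += 1
    return pairs

sample = "North Korea launders money using crypto, and money laundering is big business"
counts = cooccurrence_map(sample)
# Words that often sit near each other get a high count...
print(counts[frozenset(("money", "laundering"))])   # 2
# ...while words that never appear near each other score zero.
print(counts[frozenset(("korea", "business"))])     # 0
```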

    Next comes your question. For example, you might ask "How does North Korea launder money?" (To know the full story, buy a copy!) The chatbot software detects the words "money" and "launder", and thanks to its numerical map of the book, it can see that the text contains many areas where those two words are likely to occur together. It also detects "North" and "Korea", and sees that those words occur frequently together in two areas of the book, representing the two chapters about North Korea.
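    Continuing the toy sketch, that matching step could look roughly like this: split the text into chunks and score each chunk by how often the question's key words appear in it. The chunk texts below are made-up stand-ins, not passages from the book.

```python
# Toy sketch: score each chunk of text by how many times the
# question's key words appear in it, then keep the best matches.
import re

def score_chunk(chunk, question_words):
    words = re.findall(r"[a-z]+", chunk.lower())
    return sum(words.count(w) for w in question_words)

chunks = [
    "North Korea's hackers launder money through crypto exchanges",   # made-up stand-in text
    "The gang moved cash through casinos and car washes",             # made-up stand-in text
]
question_words = ["north", "korea", "launder", "money"]
print([score_chunk(c, question_words) for c in chunks])   # [4, 0] - the first chunk wins
```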

    Your question also contains the word "How". "How" questions are often answered in a particular way. If I ask "How do I bake a cake?", you might respond "You use this method" or "Try this technique". So now the software looks through the numerical map of the book for areas where words like "method", "technique", "tactics" and so on occur near "North", "Korea", "money" and "launder".
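    In the toy sketch, that could be as simple as adding a few extra "answer-type" words to the search, depending on the question word. The lists below are invented for the illustration - a real system learns these associations from huge amounts of text rather than from a hand-written table.

```python
# Toy sketch: a "How..." question makes words like "method" and "technique"
# worth looking for, alongside the topic words from the question itself.
ANSWER_CUES = {
    "how": ["method", "technique", "tactics", "steps"],
    "why": ["reason", "because", "motive"],
    "where": ["country", "city", "place"],
}

def expand_question(question_words):
    expanded = list(question_words)
    for word in question_words:
        expanded.extend(ANSWER_CUES.get(word, []))
    return expanded

print(expand_question(["how", "north", "korea", "launder", "money"]))
# ['how', 'north', 'korea', 'launder', 'money', 'method', 'technique', 'tactics', 'steps']
```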

    Next comes the answer. It's important to understand that the software has not read or understood the text - at least, not in the way humans do. Instead, it's doing calculations about the likelihood of words appearing together. So when it comes to answering your question, the chatbot might decide the most likely first word is "North". The word most likely to occur next would then be "Korea". Next most likely might be "launders", and after that "money", then "using", "this", "technique". So the answer starts to emerge: "North Korea launders money using the following technique..."
    It's essentially building word upon word to create sentences.
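    Here's that word-by-word building process as a toy sketch: count which word tends to follow which in a piece of text, then repeatedly pick the most likely next word. Again, the sample text is invented for the illustration, not taken from the book.

```python
# Toy sketch: build an answer word by word, each time choosing the word
# that most often follows the previous one in the source text.
from collections import Counter, defaultdict

def next_word_counts(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts, start, length=6):
    answer = [start]
    for _ in range(length):
        followers = counts.get(answer[-1])
        if not followers:
            break
        answer.append(followers.most_common(1)[0][0])   # the single most likely next word
    return " ".join(answer)

source = "north korea launders money using this technique " * 3   # made-up stand-in text
counts = next_word_counts(source)
print(generate(counts, "north"))   # north korea launders money using this technique
```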
    What about "hallucinations"?
    Hallucinations are the generative AI industry's term for moments when the software gives bad answers. In the example above, the software might pick "North" as the first word of the answer, but then decide that the next most likely word is "London".

    If you think about it, that would lead the chatbot down a completely different (and far less relevant) path. Rather than just producing unsatisfying answers, this can sometimes create false or misleading responses.

    Does this mean the chatbot software "lies"? Lying implies a moral decision, and of course the software has not made a moral decision - it has made an incorrect mathematical decision about the likely order of words. For the user, though, the effect might feel much the same.
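    To make that wrong turn concrete in the toy sketch: if the software samples its next word from the likelihoods, rather than always taking the single most likely one, it will occasionally land on "London" instead of "Korea". The probabilities below are invented purely for the illustration.

```python
# Toy sketch: sampling the next word from made-up likelihoods for what follows
# "North" - most picks are "korea", but now and then "london" slips in.
import random

followers = {"korea": 0.90, "london": 0.07, "dakota": 0.03}   # invented likelihoods

random.seed(1)   # fixed seed so the example is repeatable
picks = random.choices(list(followers), weights=list(followers.values()), k=20)
print(picks.count("korea"), "picks of 'korea',", picks.count("london"), "of 'london'")
```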
    How can I find out more?
    Please contact us - we'd love to hear your feedback.