In this newsletter:
Post: Can You Find the AI Truth?
ICYMI: AI Updates
POTW: Signs in the US
Can You Find the AI Truth?
Although Artificial Intelligence (AI) has only been ‘mainstream’ for a bit over a year now, it’s actually been around much longer. Since the 1950s, some variation of AI has been built to complete tasks without user intervention. As computers - more specifically, their processors - have become more powerful, the tasks AI can complete have become more complex. So where does that leave us?
First off, that leaves us with a tool that can cause as much harm as it can good. As responsible users, we should be aware of how AI functions so we can utilize it to the best of our ability without causing harm or breaking any laws. We must also be aware of ethical and moral issues that may arise.
AI as a tool is able to complete tasks that previously required human intelligence or skill. It does this by recognizing patterns, making decisions, holding conversations, and learning from all of the above. There’s much more it is capable of, so please understand I’m taking liberties in order to explain from a bird’s eye view.
In order to have enough data to ‘complete a task,’ the AI requires access to information. Typically, an AI tool is powered by a Large Language Model (LLM) - a model trained on enormous, specific sets of information within defined parameters. LLMs are very difficult and expensive to build, which is one reason most AI tools pay for access to an existing one (more on this in the coming newsletters).
As one can imagine, the more parameters an LLM has, the more it may be capable of. It also means the model can be more creative, because it has more to learn from. Let’s compare an AI chatbot to your phone’s smart assistant. The most widely used AI chatbot, ChatGPT, has a different number of parameters based on the version in use: early versions had millions, while the latest is reported to have over 1 trillion. In contrast, Siri has fewer than 1,000 commands available, and a typical Siri command is little more than a web search that replies with an answer.
The main difference between a generative AI and a smart assistant is what happens when there’s no direct or factual answer. Smart assistants (Siri, Alexa, Google) can only do what they’re programmed to do within their parameters. If you ask one of them a question it can’t answer, you will typically receive a response of “I don’t know,” or “I’m sorry, I can’t answer that.”
AI has parameters, too - but it is given the ability to create outside of those walls. This is the ‘generative’ part of generative AI: it will generate something as an answer. This is an important distinction, and something users must be aware of, because the AI can - and will - make things up. It’s also important to note that this isn’t intentional on the programmers’ part. No one is trying to give you false information. The AI is simply doing what it was created to do: generate information.
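The distinction can be sketched with a toy example. (Everything here - the command table, the word list, the "generator" - is invented purely for illustration; real assistants and LLMs are vastly more complex.)

```python
import random

# A smart assistant behaves like a lookup table: known commands
# return a fixed answer, and anything outside its parameters
# gets an explicit refusal.
COMMANDS = {
    "what time is it": "It is 3:00 PM.",
    "set a timer": "Timer set for 10 minutes.",
}

def assistant(query: str) -> str:
    return COMMANDS.get(query.lower(), "I'm sorry, I can't answer that.")

# A toy "generative" model, by contrast, never refuses: it always
# produces *something* by stitching together patterns it has seen,
# whether or not the result is true.
WORDS = ["the", "court", "ruled", "in", "cited", "cases", "precedent"]

def toy_generator(query: str, length: int = 6) -> str:
    random.seed(query)  # same prompt yields the same made-up "answer"
    return " ".join(random.choice(WORDS) for _ in range(length)) + "."

print(assistant("open the pod bay doors"))      # refusal: outside its commands
print(toy_generator("open the pod bay doors"))  # fluent-looking, but fabricated
```

The point of the sketch is the shape of the failure mode: the lookup table fails loudly, while the generator fails by confidently producing plausible-sounding text.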
A prominent example of this made it all the way into a court case. A lawyer in New York used ChatGPT to find cases establishing precedent - except all six cases cited by ChatGPT were made up. The lawyer argued he thought the tool was nothing more than an advanced search engine, which I’m sure many people think after trying it out. Many students across all grade levels have undoubtedly turned in AI-generated papers, too. So there is a level of ethics that needs to be agreed upon regarding AI.
Remember as we move forward: AI is based on human data. This sometimes introduces unintentional bias. And even if it’s not inherently biased, it can be convinced to be. Granted, you won’t get it to take sides in every major political debate, but you can absolutely ask AI to write a 5-point proposal on why cats are better than dogs. Or vice versa. As for other content, AI doesn’t have the ability to discern or ‘read between the lines’ as well as humans can. It can’t always infer new conclusions from factual evidence the way we do. Just something to be aware of.
Moving on, what about copyright? This is where it gets a bit tricky. We’ve seen how most things generated by AI cannot receive copyright, but there is a case in the courts right now that could change that. Even if things change massively for AI, the proverbial cat is already out of the bag. If someone creates a new technology designed to limit output or restrict certain content, there will quickly be technology available to get around it. Regulation - both legal and technical - will be very tricky.
Ultimately, it’s up to the user to make sure the information given is factually correct - or at least usable, if it’s for some type of fiction piece. There are also tons of short stories and novellas written by AI, so at some level, it’s already been allowed. Some AI chatbots will give references, and others will give them if asked. More on this coming soon, too.
There are many who think AI will replace jobs. While this may be true, it’s also been happening for over 200 years, just under different names. I’ve said it before: I don’t see AI replacing jobs en masse. It will, however, potentially allow those who use AI to replace those who don’t. My suggestion is to join the ride and start utilizing AI any way you can to improve your life at home and at work. How to do this, you might ask? Watch your inboxes next week…
What are some other ways to verify AI-generated information?
ICYMI: AI Updates
Adobe announced an update to their premier PDF reader, Acrobat. The new AI Assistant is currently in beta testing and gives users a new way to interact with their documents. Popular uses include chatting with the AI to ask questions about and summarize a document’s text.
Seemingly out of nowhere, Nvidia released higher-than-expected quarterly earnings amid the demand for their chips for AI. This allowed them to break a recently set stock-market record - adding $272 billion to their market cap in one day. Nvidia is now the third most valuable company, behind Microsoft and Apple.
With the rise in AI software and tools, the Federal Trade Commission (FTC) is proposing rules around the impersonation of governments and businesses. Although we’ve seen the deepfakes (of video and audio), others are using this technology to impersonate governmental entities and businesses with the sole purpose of committing fraud via scams.
Last summer, Reddit started preparing to IPO. On Thursday, they filed with the SEC. Founded in 2005, they reported a net loss of $90 million in 2023 but plan to move into the black as AI helps build on their search capabilities. They’ve already signed a $60 million contract with a large, unnamed AI company - a deal that will essentially allow that company’s AI to interface with Reddit and utilize its content as part of the LLM.
POTW: Signs in the US
Those of us who live in the US and have been fortunate enough to travel outside our borders may have noticed a few things. We measure things differently - including weight, weather, and distance, for example. If you paid attention in any high school science class, you should know that, too, but I don’t want to assume. Something you may not have noticed is the different signage on the highways. There is a simple reason why this is the case, but I don’t want to ruin the lead-up to it.
Visit YouTube to see Why US Signs Look Different Than The Rest Of The World’s.
If you need a reason to take a field trip to see the different signs across the world, check out this article by Thrillist - The Places That Avid Travelers Instantly Fell in Love With.