So this month I thought I’d sign up for MasterClass and check it out. The first course promoted to me was a new one about AI called “Achieve More With GenAI”.
TLDR: It was interesting to learn a few new ways to use ChatGPT, like creating your own GPT, and I’m interested in trying out some of the other apps they used in the course. Overall, though, I’m not sure I’d call it a “Master” class so much as a well-produced summary of how four people in the industry use AI themselves, along with their opinions.
Something I found useful that they clarified was that AI is different from something like a Google search, insofar as there is no right or wrong answer.
When you search Google, it is literally returning a list of answers based on a database of information. There is a particular answer to a particular question.
LLMs like ChatGPT aren’t responding from a set of accurate answers stored for your question. Rather, they respond a bit like a human does, giving you a best guess based on a lot of repetitive information they have processed.
It’s similar to how you learn: you believe that your name is “Jack”, your friend’s name is “Molly”, and the country you live in is “Italy”, not because you have a database of information somewhere that lists names of people and places together with their descriptions.
Instead, you have been told over and over, from the day you were born, that your name is Jack. So now you tell people that your name is Jack. This is not an objective truth; maybe your passport actually says “Jackson”, and you know that your “real name” is Jackson because that’s the information you have received over and over. But if, throughout your life, 50% of the time people referred to you as “Jack” and the other 50% of the time as “Steve”, you might give different answers at different times when someone new asks you your name.
Similarly, LLMs are essentially just guessing based on rough averages of the most common responses to an input. What they are really doing is “predicting” the next most likely response. And this isn’t even happening as a single full answer.
As the LLM responds, each part of the response is generated as the most likely continuation of the parts that came before it.
This is why people talk about LLMs as “hallucinating”. They don’t actually have specific answers that are right or wrong. Instead they are just dreaming up an answer based on all the information they have ever received. (Which in my opinion is pretty human-like if you think about it).
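That guessing process can be sketched with a toy bigram model — a drastic simplification of a real LLM, but the same basic idea: each word is sampled in proportion to how often it followed the previous word in the “training” data. (The corpus and function names here are made up for illustration and echo the Jack/Steve example above.)

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus": the model only ever sees these sentences.
corpus = [
    "my name is jack",
    "my name is jack",
    "my name is steve",
]

# Count which word follows which -- the model's entire "knowledge".
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word):
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following[word]
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

# Generate word by word: each prediction depends only on what came before.
generated = ["my"]
while generated[-1] in following:
    generated.append(predict_next(generated[-1]))

print(" ".join(generated))
```

Because “jack” followed “is” twice and “steve” once, the model usually prints “my name is jack” but sometimes “my name is steve” — there is no stored right answer, just probabilities learned from repetition.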
Ok, so what tools were mentioned during the course:
- ChatGPT (No surprises here)
  - Interesting learning here: ChatGPT has a “create” function where you can provide specific inputs to build your own GPT.
- PicStudio - Generate Photos of yourself with AI
- Udio Music - Create a song by describing it
- Yoodli - AI speech and presentation coach
- Claude - AI assistant
- Framer - AI Website builder