🤝 AidfulAI Newsletter #3: A Thought-Provoking Idea to Tackle Your Top 12 Problems
Major AI News
🖼️📷 Midjourney's Version 5.1 Brings More Opinionated AI-Generated Images
Midjourney has released version 5.1 of its AI image generator, which improves the quality of the images produced. The company states that the biggest improvement is that version 5.1 is now “more opinionated”: it produces images with a stronger editorial feel that look more like they were taken by a professional photographer. However, a “raw” mode that is less opinionated is still available. Midjourney also claims that version 5.1 is much easier to use with short prompts. Other improvements include higher coherence, greater accuracy in responding to text prompts, fewer unwanted borders or text artifacts, and better image sharpness.
Midjourney's AI image generator leads in quality compared to similar tools like DALL·E 2 and Stable Diffusion. However, Midjourney can still only be used through its Discord server. Since there is no code or API available, building on top of the service is challenging for users who want to integrate it into other platforms or applications.
🔊🤖 Amazon Alexa: The Hub of Future AI
Alexa is Amazon’s cloud-based voice service, available on hundreds of millions of devices from Amazon and third-party manufacturers. For many, Alexa has transformed the way they interact with their homes and daily routines. So far, Alexa is mainly used for basic tasks like getting weather information, playing music, or turning the lights on and off. However, with its voice input and output, the potential of Amazon Alexa as the hub of future AI cannot be overstated.
A recently leaked document from Amazon reports that the company is exploring adding new AI chatbot technology to Alexa, with the goal of making the voice assistant more capable and conversational. Amazon plans to rely on its own internally developed large language model (LLM) to power the new and improved Alexa, rather than using OpenAI's models. The company aims to make Alexa appear as an intelligent entity that thinks and converses naturally.
🌐🚀 Google Unveils Latest Breakthroughs at Google I/O 2023
Google I/O is an annual event where Google showcases its latest developments. The following is a summary of this year’s AI highlights.
The release of the ChatGPT competitor Bard is one of the most exciting announcements. Bard is now available worldwide in English, Japanese, and Korean. It is powered by PaLM 2, which competes with OpenAI's GPT-4. Google also announced specialized versions: Med-PaLM 2, focused on medical knowledge, and Sec-PaLM, focused on cybersecurity.
Furthermore, Google introduced the Search Generative Experience (SGE), which updates the search engine by integrating AI chatbot answers into the search results. This provides users with full sentences instead of just links or snippets of information.
Despite criticism that it has been late to respond, these updates demonstrate Google's ongoing commitment to advancing AI technology and integrating it into its products.
🔐📜 Privacy or Chat History? Get both with your OpenAI API-key
As stated in Issue #1 of this newsletter, OpenAI recently released an option to opt out of having your data used to train upcoming versions of ChatGPT. However, this comes with the downside that your chat history is no longer accessible.
An alternative is to use an API key, because according to OpenAI's API data usage policies, these conversations will also not be used for future training runs.
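If you are curious what using your own key looks like in practice, below is a minimal sketch of calling the chat API directly. It assumes a recent version of the `openai` Python package and an `OPENAI_API_KEY` environment variable; the helper names (`append_turn`, `chat_once`) are my own illustration, not part of the library.

```python
import os

# The chat endpoint expects a list of {"role", "content"} dicts;
# keeping this history locally is what preserves context between turns
# (there is no server-side chat history when you use the API).
def append_turn(history, role, content):
    """Return a new history list with one more chat turn appended."""
    return history + [{"role": role, "content": content}]

def chat_once(history, user_input):
    """Send the running conversation to the API and return the reply text.

    Requires `pip install openai` and OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI  # imported here so the pure helper works without it
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    messages = append_turn(history, "user", user_input)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any available chat model name works here
        messages=messages,
    )
    return response.choices[0].message.content
```

Since you manage the message list yourself, saving it to a file gives you both privacy and a persistent chat history, which is exactly the trade-off the opt-out in the ChatGPT web interface does not offer.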
You are charged by usage, but the API costs are quite fair. To get started, I recommend the tool TypingMind from Tony Dinh, a well-known indie hacker. The tool explains nicely how to get and use an API key. A basic version is freely usable, but it will regularly ask you to buy a license, which currently costs a one-time $39. I did the latter and use the tool daily. If you are keen to know more, you might be interested in my Twitter thread about TypingMind, where I describe some additional aspects and features.
There are completely free open-source alternatives, e.g., chatbot-ui, BetterChatGPT, or turbogpt.ai, but I have not tested them so far. One final remark: double-check that any application you intend to use with your API key is trustworthy, as there are certainly scams designed to misuse your key.
PKM and AI
💡❓ A Thought-Provoking Idea to Tackle Your Top 12 Problems
The CODE Framework from Tiago Forte is a methodology for organizing and managing digital information more efficiently and effectively. The framework comprises four key concepts: Capture, Organize, Distill, and Express (CODE). With the ever-increasing volume of digital information available, the CODE Framework provides a simple yet powerful way to manage and process information. Additionally, by leveraging the capabilities of AI, it is possible to further maximize its efficiency. In this and upcoming issues of this newsletter, I will delve into how AI can enhance the CODE Framework.
The first letter of CODE stands for “Capture” and represents the phase of gathering all relevant information on a specific topic. Tiago answers the most frequently asked question, what to capture, with “what resonates with you”. That might be just a small moment where you stop while reading and think that something is interesting. It is clear that today’s AI cannot do this for you. However, there is so much information out there that you cannot consume everything; you need to decide what to process to capture the information you need to move forward with a project or another task you are working on. Social media and other systems use AI to generate a personal news feed for you, but be aware that they might filter away the most valuable pieces.
Unless you make conscious, strategic decisions about what you consume, you will live in a filter bubble. I don't have the solution for that, but I can imagine a concept which would most likely be the best filter for the information overload. It builds on another framework communicated by Tiago Forte: the 12 favorite problems. This idea goes back to Richard Feynman, who received the Nobel Prize in Physics. His approach to problem-solving involved keeping a list of his “favorite problems”: open questions that he found himself returning to again and again in his research. By applying new findings and results he heard and read about to these questions, he was able to make unexpected connections and achieve breakthroughs that left others astonished. His approach required patience but helped him perceive connections that no one else could see. As a blog article by Tiago Forte suggests, you should define your own 12 favorite problems.
Taking this one step further, there should be an AI which identifies, out of the huge amount of information available, the pieces most likely to help you answer one of your favorite problems. The AI would then provide you with a summary tailored to your problem, which gets added to your read-it-later app. So far, this is only a concept, but if you think this is something you and the world need, please tell me by replying to this mail.
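To make the concept a bit more concrete, here is a toy sketch of the matching step: score each incoming article against your list of favorite problems and keep only the good matches. It uses simple word-overlap (cosine similarity on bags of words) as a stand-in for the text embeddings a real system would use, and all function names are hypothetical.

```python
import math
import re
from collections import Counter

def bow(text):
    """Toy bag-of-words vector; a real system would use text embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_for_problems(articles, problems, threshold=0.2):
    """Return (article, best_problem, score) tuples, best match first.

    Articles below the threshold are dropped entirely: this is the
    "filter" that protects you from the information overload.
    """
    picks = []
    for article in articles:
        vec = bow(article)
        score, best = max((cosine(vec, bow(p)), p) for p in problems)
        if score >= threshold:
            picks.append((article, best, score))
    return sorted(picks, key=lambda t: -t[2])
```

In a full pipeline, the surviving articles would then be summarized by an LLM with the matched favorite problem in the prompt, and the summary pushed to your read-it-later app.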
🔒🔓 The “No Moat” Debate: Closed AI vs. Open-Source AI
The Latent Space podcast episode “No Moat: Closed AI gets its Open Source wakeup call — ft. Simon Willison” discusses a leaked document by Google software engineer Luke Sernau, which states that both Google and OpenAI have “no moat” against the open-source community in the development of LLMs. He argues that the pace of AI progress in the open-source community has been increasing rapidly and that it is difficult for companies like Google and OpenAI to keep up.
In contrast, Emad Mostaque, the founder of Stability AI and an advocate of open-source AI models, argues that closed AI models will always outperform open ones because closed models can wrap open ones. He also acknowledges that unique usage data, content, talent, products, and business models can act as moats, and Google has most of these, giving it a competitive advantage over other companies. OpenAI may have less unique content and fewer business models than Google, but it is currently winning on the talent front.
Thanks for reading! If you are not already subscribed, enter your email address below to receive new issues and support my work.