The initial version of Bard uses a lightweight model version of LaMDA because it requires less computing power and can be scaled to more users, according to the release. In addition to LaMDA, Bard will draw on information from the web to provide responses; Pichai said pulling from the web would provide “fresh, high-quality responses.” The use of LaMDA is a sharp contrast with most AI chatbots right now, including ChatGPT and Bing Chat, which use an LLM from the GPT series.

People quickly noticed that Bard’s output was factually incorrect. As ZDNET reporter Stephanie Condon reports, the first photo of an exoplanet was taken in 2004 by the European Southern Observatory’s VLT (Very Large Telescope).

“This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program,” a Google spokesperson told ZDNET in a statement.

Before Bard was released, Google’s LaMDA came under fire as well. As ZDNET’s Tiernan Ray reports, shortly after LaMDA’s publication, former Google engineer Blake Lemoine released a document in which he shared that LaMDA might be “sentient.” The controversy faded after Google denied the claim of sentience and put Lemoine on paid administrative leave before letting him go from the company.

Within the same week Google unveiled Bard, Microsoft unveiled a new AI-improved Bing, which runs on a next-generation OpenAI large language model customized specifically for search.

Beyond Bard, Google has developed an AI image generator, Imagen, which could be a strong alternative to OpenAI’s DALL-E when released. Google also has an AI music generator, MusicLM, which Google says it has no plans to release at this point.
In a recent paper discussing MusicLM, Google acknowledges the risks these kinds of models pose, including the misappropriation of creative content, biases present in the training data that could affect cultures underrepresented in that data, and fears over cultural appropriation.