But now we have Google’s entrant, Bard. Google granted me access to the beta release, so I decided to subject it to the same tests I’ve given ChatGPT.

Unfortunately, I suspected I wouldn’t get the same solid B-level work ChatGPT has been able to produce. ZDNET’s Associate Editor Sabrina Ortiz ran a broad range of tests as soon as Bard became available, and found disappointing results. You know things might be problematic when the article is entitled, “I tested Google Bard. It was surprising – in a bad way.” So. Yeah.

But let’s see if Bard excels in any particular area. To do that, I’m going to do an article-for-article comparison of Bard and ChatGPT: I’ll give Bard the exact same prompts I gave ChatGPT, so you’ll be able to compare the results. At the end of the series, we’ll tabulate the scores to get a more comprehensive side-by-side view of both AIs. Let’s start with one that I already know isn’t going to work.
Can Bard write a WordPress plugin?
This first test was where I started getting the feeling that ChatGPT was game-changing. In “I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes,” I gave ChatGPT a prompt describing an actual need my wife had for her WordPress site. Within a few minutes, ChatGPT provided a working plugin. It’s been installed on her site for a little over a month, and she uses it regularly.

Here’s the prompt:

And here’s how Bard answered:

“I’m not able to help with coding just yet, but I can do things like answer other complex questions, help you learn a new skill, or write a business letter. Do you want me to help with anything else?”

To be fair, I knew this was going to be the case. Sabrina’s tests had already found that Bard couldn’t code.
Can Bard help with coding?
My next article in the series was “How to use ChatGPT to write code.” In it, I explained that there are steps involved in coding beyond just typing code into an editor. I picked a sample project: a routine that would retrieve a web page, extract the meat of the content (not the ads and all the other stuff that’s normally on a page), and summarize the results.

Before writing a line of code, it was necessary to choose libraries and features to include in the code. ChatGPT helped with that. Let’s see if Bard can as well. With ChatGPT, I wrote my questions like a conversation, so let’s do the same with Bard.

The results were good. Bard gave me ten different libraries. Some were different from those recommended by ChatGPT, but they were all valid libraries. Let’s ask the next question in the original chain. Bard responded with, “Yes, all of the libraries I listed are free and open source. You can find them on GitHub and other open source repositories.” I think that’s mostly correct. The only place where I’m not sure is that Bard recommended the GitHub Bard library, and since Bard is still in beta, we really won’t know for a while whether it can be baked into code without any fees.

Now, this is interesting. Bard’s answer was a bit more nuanced than ChatGPT’s. Specifically, Bard replied:

“All of the libraries I listed are free and open source, and they do not have any usage limits or require any additional payment or licensing. However, some of them may have additional restrictions, such as requiring you to cite the source or not using them for commercial purposes. It is important to read the license of each library before using it.”

ChatGPT didn’t mention licensing or use restrictions. So, I’ll ask Bard a question I didn’t ask ChatGPT: Bard replied with eight of the original ten libraries. Okay, so let’s get back to the prompts I also used with ChatGPT.
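To make the sample project concrete, here’s a minimal, stdlib-only sketch of the “extract the meat of the content” step. This is my illustration, not code from either chatbot: the class name, the skip list, and the sample HTML are all mine, and a real run would fetch the page first (with urllib or the requests library) and then hand the paragraphs to a summarizer such as Sumy.

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect text inside <p> tags, skipping nav/aside/script-style chrome."""
    SKIP = {"script", "style", "nav", "aside", "footer"}

    def __init__(self):
        super().__init__()
        self.in_p = False        # currently inside a content <p>?
        self.skip_depth = 0      # nesting depth of skipped elements
        self.paragraphs = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag == "p" and self.skip_depth == 0:
            self.in_p = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "p" and self.in_p:
            self.in_p = False
            text = "".join(self._buf).strip()
            if text:
                self.paragraphs.append(text)

    def handle_data(self, data):
        if self.in_p and self.skip_depth == 0:
            self._buf.append(data)

# A stand-in for a fetched news page (a real run would download it first).
html = """
<html><body>
  <nav><p>Home | News | About</p></nav>
  <p>Bard is Google's conversational AI.</p>
  <p>It launched in beta in 2023.</p>
  <aside><p>Subscribe now!</p></aside>
</body></html>
"""

extractor = ParagraphExtractor()
extractor.feed(html)
print(extractor.paragraphs)
```

The point of the sketch is the filtering: the navigation bar and the subscription box both sit inside `<p>` tags, but they’re dropped because they live inside skipped elements, which is exactly the “not the ads and all the other stuff” requirement from the prompt.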
It identified four: Sumy (which ChatGPT also identified), Bard itself, TextRank, and Latent Semantic Analysis (LSA). I next asked ChatGPT to compare the libraries it found, so I’ll ask Bard to do the same thing with the libraries it recommended. This is what I originally asked ChatGPT: “What’s the difference between Sumy, Gensim, and NLTK?” And here’s my question to Bard:

It produced a very solid answer. Here’s what it said:

“Sumy, Bard, TextRank, and Latent Semantic Analysis (LSA) are all algorithms that can be used to summarize text documents. However, they differ in how they work and what their strengths and weaknesses are.”

“Sumy is a Python library that can summarize text files and HTML pages. It uses a variety of algorithms, including Luhn, LSA, Edmundson heuristic method, LexRank, and TextRank. Sumy is a good choice for summarizing large amounts of text, as it is very efficient. However, it can sometimes generate summaries that are not very accurate.”

“Bard is a large language model from Google AI that can be used for a variety of tasks, including text summarization. It can be used to summarize text files, HTML pages, and even code. Bard is a good choice for summarizing text documents that are difficult to summarize with other algorithms, as it can understand the meaning of the text and generate summaries that are more accurate. However, it can sometimes generate summaries that are too long or repetitive.”

“TextRank is a machine learning algorithm for generating summaries of text documents. It is based on the idea that the importance of a sentence in a document is determined by the number of other important sentences that link to it. TextRank is a good choice for summarizing text documents that contain a lot of information, as it can identify the most important sentences and generate summaries that are concise and informative. However, it can sometimes generate summaries that are not very accurate.”
“Latent Semantic Analysis (LSA) is a statistical method for extracting the latent semantic structure of a corpus of text. It can be used to summarize text documents by identifying the most important topics. LSA is a good choice for summarizing text documents that contain a lot of information, as it can identify the most important topics and generate summaries that are concise and informative. However, it can sometimes generate summaries that are not very accurate.”

“Ultimately, the best algorithm for summarizing a text document will depend on the specific document and the desired outcome.”

In my original ChatGPT test, I asked for some additional clarification, so let’s do the same with Bard. Where ChatGPT recommended Sumy, Bard recommended Bard. Here’s Bard’s answer:

“Based on your requirements, I suggest using Bard. It can summarize text documents that are difficult to summarize with other algorithms, as it can understand the meaning of the text and generate summaries that are more accurate. Additionally, it can summarize text documents that contain a lot of information, which is a common characteristic for web page news articles.”

It does seem to understand the spirit of the questions I’m asking. My next question in the ChatGPT test was about using Sumy with PHP. Since Bard recommended itself, I’ll ask the same question about Bard. Here’s the answer. It’s pretty solid, and it even includes some code snippets showing how it might work.
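Several of the algorithms Bard described (Luhn, TextRank, LexRank) are extractive: they score the sentences already in the document and keep the top scorers. Here’s a deliberately tiny, stdlib-only illustration of that idea — scoring each sentence by the document-wide frequency of its words. This is my toy, not Sumy’s or TextRank’s actual implementation; the stopword list and sample text are invented for the demo.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def summarize(text, n_sentences=1):
    """Toy extractive summarizer: score each sentence by the total
    document-wide frequency of its non-stopword words, then return
    the top-scoring sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[t] for t in tokens)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in ranked]

article = ("Bard is a chatbot from Google. ChatGPT is a chatbot from OpenAI. "
           "A chatbot answers questions in natural language. "
           "My cat ignores questions entirely.")
print(summarize(article))  # → ['A chatbot answers questions in natural language.']
```

The sentence that wins is the one packed with the document’s most frequent words (“chatbot,” “questions”), which is the same intuition Luhn-style scorers use; real libraries like Sumy add better tokenization, stemming, and more sophisticated sentence-graph algorithms on top of it.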
Conclusion: can Bard help with coding?
Obviously, I didn’t feed Bard the prompt to write the actual summarize_article routine, because we’ve already established that it can’t code. But contrary even to Google’s own statement, it’s clear that Bard can definitely assist with coding, at least in terms of locating resources and helping to sift through those resources for relevant information.

Some of Bard’s answers were more nuanced than ChatGPT’s: it pointed out licensing issues in response to one question, and listed disadvantages as well as advantages of the various libraries in another. That’s a win for Bard.

I’m planning to dive deeper into how Bard can help solve technical challenges. ChatGPT did quite well with those challenges, so I’m curious how Bard does. Stay tuned.

Now, to be fair, I’ll definitely choose ChatGPT over Bard if I need coding help. But Bard isn’t completely without game, and I can see it coming in really handy as a source of a second opinion for many different types of research. After all, I’ve caught ChatGPT in the act of just making stuff up rather than admitting it doesn’t know the answer to something. Bard demonstrated real usefulness with the coding examples above, adding value and nuance that ChatGPT missed, even though Bard isn’t able to actually write code… yet.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.