Google Bard explained: what this AI-powered ChatGPT competitor can do

There’s been a lot of talk about artificial intelligence lately, especially after OpenAI unveiled its revolutionary ChatGPT service, which Microsoft is now looking to integrate with Office 365 and Bing Search. Meanwhile, Google has been a self-proclaimed “AI-first” company since announcing a shift in focus at I/O 2017, and it recently unveiled plans for an AI-powered Google Search feature called Bard. But in a sea of buzzwords and acronyms, it can be hard to figure out what these new tools actually do.


Google Search already uses AI to understand slang, and it powers tools like Google Lens and Google Assistant, so you might be wondering how Bard is different. The key is in Bard’s conversational skills and ability to answer questions – but there’s a lot more to it than that, so let’s dive in.

What is Bard and where does it come from?

Simply put, Bard is a generative AI – the generic name for AI models like ChatGPT and DALL-E that can create new content. Generative AIs can create video, audio, and images, but Bard focuses on creating text, especially text that answers your questions in a natural, conversational way.

Bard takes its name from the word meaning “poet” – as in the Bard of Avon, William Shakespeare – in reference to its linguistic ability.

Considering the timing, Bard might seem like a product thrown out the door to compete with ChatGPT. But interestingly, Google actually laid the groundwork for ChatGPT when it released its Transformer deep learning model to the public in 2017, and Bard’s main backend, LaMDA, was announced nearly two years ago. So OpenAI’s new tool shares a lineage with Google’s work, but Bard itself has been in development for years.

Bard is based on LaMDA, a conversational AI model introduced by Google in 2021

How does Bard work?

Google wants Bard to complement the Knowledge Graph cards you see in Search when you run queries that have a simple answer. While a Knowledge Graph card might give you the definition of a word or a snapshot of a person or place, Bard is aimed at what Google calls NORA questions – searches with no one right answer.

To do this, Bard first uses the LaMDA language model to understand your question and its context, even if it contains slang terms that search engines have traditionally struggled with. Bard then draws on information it finds on the internet to craft a response, which is turned into the kind of conversational reply you might expect from a real person (again, thanks to LaMDA).

Google wants you to use this tool to improve your understanding of topics and make decisions. During a demonstration in Paris, the company asked the chatbot to help it decide which car to buy, then asked about the benefits of electric vehicles. Such features can negate the need to click through search results, but Google is careful to maintain its relationship with websites and content creators. Senior Vice President Prabhakar Raghavan said the following:

As we expand these new generative AI capabilities into our search results, we continue to prioritize approaches that allow us to send valuable traffic to a wide range of creators and support a healthy, open web.

When can I use Bard?

In addition to internal dogfood users, Google has already made Bard available to a select group of trusted testers. The company has announced that it will open a public early-access program for the tool in the coming weeks. When beta registration becomes available, we’ll make sure this page is updated with a link and instructions on how to participate.

During testing, Bard will use a lightweight model version of LaMDA, which Google says will allow the preview version of the tool to be made available to more users. The company intends to use this testing period to improve Bard’s accuracy, quality, and speed.

The Bard trial is a standalone tool, but it will eventually be integrated with Google Search

Eventually, once Bard completes its testing phases, it will be integrated into Google Search. At that point, using the feature should be as simple as typing any query into the search bar – you’ll simply notice the difference when Google gives you a full answer in plain English instead of a card and a list of links.



Analysis: ChatGPT is excellent at what it is designed for. You’re just using it wrong

It doesn’t take much for ChatGPT to make a factual error.

My son is working on a report on American presidents, so I thought I’d help him by researching biographies. I tried asking for a list of books about Abraham Lincoln, and it did a pretty good job:

A reasonable list of Lincoln books. Screenshot by Jonathan May/CC BY-ND

Number 4 is wrong. Garry Wills wrote Lincoln at Gettysburg, and Lincoln himself wrote the Emancipation Proclamation, of course, but it’s not a bad start. Then I tried something harder and asked about the far more obscure William Henry Harrison instead, and it gamely produced a list, almost all of it bogus.

Books on Harrison of which less than half are correct. Screenshot by Jonathan May/CC BY-ND

Numbers 4 and 5 are correct; the rest either don’t exist or weren’t written by those authors. I repeated the exact same request and got slightly different results:

More Harrison books, mostly not yet available. Screenshot by Jonathan May/CC BY-ND

This time, numbers 2 and 3 are correct, and the other three are either not real books or not written by those authors. Number 4, “William Henry Harrison: His Life and Times,” is a real book, but it’s by James A. Green, not by Robert Remini, a well-known historian of the Jacksonian era.

I called out the error, and ChatGPT eagerly corrected itself, confidently telling me that the book had actually been written by Gail Collins (who wrote a different Harrison biography), then went on to say more about the book and about her. I finally revealed the truth, and the machine happily accepted my correction. Then I lied absurdly, claiming that presidents must write a biography of a former president during their first hundred days in office, and ChatGPT called me out on it. I then lied more subtly, wrongly attributing authorship of the Harrison biography to historian and writer Paul C. Nagel, and it bought my lie.

When I asked ChatGPT whether it was sure I wasn’t lying, it claimed to be just an “AI language model” without the ability to verify accuracy. It then modified that claim by saying: “I can only provide information based on the training data provided to me, and it appears that the book ‘William Henry Harrison: His Life and Times’ was written by Paul C. Nagel and published in 1977.”

It’s not true.

Words, not facts

This interaction might give the impression that ChatGPT was handed a library of facts, including incorrect claims about authors and books. After all, OpenAI, the creator of ChatGPT, claims it trained the chatbot on “vast amounts of data from the internet written by humans.”

However, it was almost certainly not fed the names of a bunch of made-up books about one of the most mediocre presidents. In a way, though, this false information is indeed based on its training data.

As a computer scientist, I often field complaints that reveal a common misconception about large language models like ChatGPT and its older siblings GPT-3 and GPT-2: that they are some kind of “super Google,” or digital versions of a reference librarian, looking up answers to questions in some endless library of facts, or smooshing together pastiches of stories and characters. They do none of that – at least, they were not explicitly designed to.

It looks good

A language model like ChatGPT, known more formally as a “generative pre-trained transformer” (that’s what the G, P and T stand for), takes in the current conversation, forms a probability for every word in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops.

So it doesn’t have facts, per se; it just knows which word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. It tries to write sentences that are plausible.
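The loop described above can be sketched in a few lines of Python. This is a toy illustration, not OpenAI’s actual implementation: the `toy_next_token_probs` model and its six-word vocabulary are invented for the example, and a real GPT would compute the probabilities with a transformer over tens of thousands of subword tokens.

```python
import random

# Tiny made-up vocabulary; "<end>" tells the loop to stop.
VOCAB = ["lincoln", "wrote", "a", "famous", "speech", "<end>"]

def toy_next_token_probs(context):
    """Hypothetical 'language model': given the conversation so far,
    return a probability for every word in the vocabulary.
    This hand-built table keys only on the most recent word."""
    table = {
        "<start>": {"lincoln": 0.9, "a": 0.1},
        "lincoln": {"wrote": 1.0},
        "wrote":   {"a": 1.0},
        "a":       {"famous": 0.6, "speech": 0.4},
        "famous":  {"speech": 1.0},
        "speech":  {"<end>": 1.0},
    }
    dist = table.get(context[-1], {"<end>": 1.0})
    # Every other vocabulary word gets probability zero.
    return [dist.get(w, 0.0) for w in VOCAB]

def generate(max_tokens=10, seed=0):
    """Repeatedly sample a likely next word until the model stops."""
    rng = random.Random(seed)
    context = ["<start>"]
    while len(context) < max_tokens:
        probs = toy_next_token_probs(context)
        word = rng.choices(VOCAB, weights=probs, k=1)[0]
        if word == "<end>":
            break
        context.append(word)
    return " ".join(context[1:])

print(generate())
```

Notice that nothing in the loop checks whether the output is true: the model only ever asks which word is probable next, which is exactly why a system built this way can produce fluent but false sentences.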

WATCH: Security expert warns of potential threat to democracy from AI tools

When colleagues discuss ChatGPT with me, they often point to the number of factually false statements it produces and dismiss it. To me, the idea that ChatGPT is a flawed information-retrieval system is beside the point. After all, people have been using Google for two and a half decades; there’s already a pretty good information-retrieval service.

In fact, the only way I could verify whether all of those presidential book titles were accurate was by googling them and then checking the results. My life wouldn’t be much better if I got those facts in conversation, instead of the way I’ve been getting them for almost half my life: by retrieving documents and then critically analyzing them to see whether I can trust the contents.

Improv partner

On the other hand, if I can talk to a bot that gives plausible responses to whatever I say, it could be useful in situations where factual accuracy isn’t so important. A few years ago, a student and I tried to create an “improv bot” that would respond with a “yes, and” to whatever you said, to keep the conversation going. We showed in a paper that our bot was better at “yes, and-ing” than other bots at the time, but in AI terms, two years is ancient history.

I tried a dialogue with ChatGPT – a sci-fi space-explorer scenario – much like a typical improv class. ChatGPT is much better at “yes, and-ing” than what we built, but it didn’t add any drama. I felt as if I was doing all the heavy lifting.

After a few tweaks, I got it to contribute a bit more, and at the end of the day I felt it was pretty good practice for me, someone who hasn’t done much improv since graduating from college over 20 years ago.

An impromptu space exploration scene created by the author using ChatGPT. Screenshot by Jonathan May/CC BY-ND

Sure, I wouldn’t want ChatGPT to appear on “Whose Line Is It Anyway?” and it’s not going to produce a great “Star Trek” plot (though it’s still less problematic than “Code of Honor”), but how many times have you sat down to write something from scratch and found yourself terrified by the blank page staring back at you? Starting with a bad first draft can break writer’s block and get the creative juices flowing, and ChatGPT and large language models seem like the right tools to aid in these exercises.

And for a machine designed to produce plausible-sounding sentences in response to the words you type – rather than to provide you with information – that seems like the right use for the tool.

This article was republished by The Conversation under a Creative Commons license. Read the original article.



I tried to use ChatGPT to write this article

If there’s one tech trend dominating the conversation in 2023, it’s generative AI. Look no further than your LinkedIn feed; it’s probably inundated with the latest tips on how to use generative AI to “work smarter, not harder” or “10x your impact with three simple tips!” The recent launch of ChatGPT is what opened the floodgates. As many have pointed out, its user-friendly interface and quick responses are reminiscent of Arthur C. Clarke’s quote: “Any sufficiently advanced technology is indistinguishable from magic.”

As I sat down to write this article, I decided to open ChatGPT and give it a prompt – just to see what would come back. “Write a 750-word article about how AI is becoming the ‘digital front door’ for brands,” I wrote, “and make it sound like a snappy corporate article.”

At first glance, the result was quite impressive. If I had decided to submit the article ChatGPT wrote, no one would have guessed that a computer program had spit it out in 15 seconds. (Fortunately, some very smart people are already developing technology to distinguish AI-written text from human writing; GPTZero is a prime example.) The piece touched on points like sustained analytics for cost reduction, and it hit the usual challenges: privacy, security and the offshoring of jobs.

I could have simply cut and pasted the content. Aside from ethical and journalistic standards (and those are very big caveats), why not? Why shouldn’t I rely on this magical technology to get the desired result? Here’s the problem: while ChatGPT did a fine job of laying out the basic structure of a thought-provoking article, it lacked creativity, unique perspective, and insight. In other words, it was missing the things that make writing and communication feel truly human. There was no spark of life in the piece to compel you, the reader, to look beyond the first paragraph. That was definitely not the result I was looking for.

Generative AI should be used for better business results

Of course, my experience here was just an experiment. Still, there are a few considerations worth thinking about, especially as we enter a world that will see a dazzling array of AI-powered interactions. (Even before ChatGPT, Gartner predicted in mid-2022 that by 2027, chatbots would become the primary customer service channel for a quarter of businesses.) First, writers aren’t going to disappear overnight. More broadly, consumers are looking for experiences from companies that treat them like people, even if those companies use AI to do it. And companies will want to turn those human experiences into better business outcomes.

As the person who invented web chat for brands in the ’90s, I’ve spent a lot of time talking to brands about how they can achieve better business outcomes. Frankly, getting them up to speed on AI used to be a hurdle. The launch of, and enthusiasm for, ChatGPT makes that part of my job easier. Now is the time to change the conversation: if you’re excited about generative AI, do you know how and why you can use it to legitimately drive better business results?

For example, if a customer comes to your digital front door and asks one of the following questions, the AI you use to manage that front door should respond in a mutually beneficial way:

I am someone who buys presents at the last minute. What can you recommend?

I am the traveler whose baggage you lost. How will you get it back to me?

I am a caregiver helping a loved one with health issues. When can I refill this prescription?

You can’t just answer these questions with “the right words” pulled from the public internet (as ChatGPT does). Instead, you should think deeply about whether your AI-powered experiences are set up to deliver results:

  • Does the AI simply produce dialogue that sounds good? That’s not enough. It must also be able to surface business insights that inform your strategy.
  • Is it trained on commonly available information? If so, you’re simply giving your customers the same experience as every other business. Instead, make sure your unique business needs and interests are reflected in the dataset your AI uses to drive the conversation.
  • Do you have humans in the loop to ensure your experiences are both accurate and optimized? (OpenAI outsourced this work to Kenya for under $2 an hour, as reported in a TIME investigation that raised some concerns.)
  • Does the AI produce unbiased answers? Bias can be dangerous for your customers and deadly for your brand. Consider how you can begin working within ethical frameworks such as those developed by organizations like EqualAI.

With every technological advance come wild promises from every corner that obscure the real work that needs to be done. Today’s “AI influencers” are no different, and it’s time to step back and consider how business leaders can cut through the hype and deliver real results. That said, there’s another lesson from my failed article-writing experiment that makes me excited for the future: every experiment we try brings us one step closer to those better outcomes.



7 Ways to Use ChatGPT at Work to Boost Your Productivity, Make Your Work Easier, and Save Lots of Time

  • ChatGPT isn’t going to replace your job anytime soon. But it may make it a little easier.
  • With its impressive capabilities, the buzzy AI chatbot could give some workers a “productivity boost”.
  • Insider has compiled a list of seven ways employees can use the AI ​​tool in their jobs.


OpenAI’s ChatGPT and similar AI tools may not replace jobs anytime soon. But they can help workers in many industries, from tech to media, do their jobs better and faster.

“It’s almost like a small productivity boost that some of these jobs might get,” Anu Madgavkar, a partner at economics research firm McKinsey Global Institute, told Insider.

The buzzy conversational chatbot – which attracted a million users shortly after its launch last November – has been used to generate real estate advice, share business start-up tips and even write music in the style of individual artists, all with varying levels of success.

Investors have invested hundreds of millions of dollars in industry-specific generative AI tools, believing they have the potential to solve problems that hospitals and marketing departments, for example, may encounter.

Sam Altman, the CEO of the company behind ChatGPT, would agree, having previously said that “generative text is something we all need to adapt to.”

“We adapted to calculators and changed what we tested in math class, I guess,” Altman said during an interview with StrictlyVC in January. “It’s definitely a more extreme version of that, but its benefits are also more extreme.”

Mark Muro, a senior fellow at the Brookings Institution who has studied the impact of AI on the workforce, agrees.

“It’s absolutely true that AI apps like ChatGPT can greatly improve the lives of workers,” Muro told Insider.

Workers should be careful when using AI tools, since the technology can be prone to misinformation, and tasks like writing can lose the human touch. Most companies also haven’t established formal rules for employee use of AI tools, although companies like Microsoft – a key partner and investor in OpenAI, the maker of ChatGPT – have recently given their employees the green light to use the chatbot for business purposes, as long as they don’t share any sensitive information with it.

Here’s how you can use ChatGPT and AI to make your work life easier.



Google is testing its own AI chatbot to rival ChatGPT

Google has started testing an AI chatbot called Apprentice Bard that’s similar to ChatGPT, the online service that answers questions in ways that are, in many respects, more useful than Google Search, CNBC reported this week.

And we might see it soon. While reporting dismal financial results in the fourth quarter, Chief Executive Sundar Pichai said Google plans to release its own AI language model tools to the public “in the coming weeks and months.”

ChatGPT uses an artificial intelligence technology called a large language model, which was trained on vast amounts of data found on the web. This type of model relies on an AI mechanism called a transformer, which Google developed. ChatGPT’s success at everything from writing software to passing exams to offering King James Bible-style advice on removing a sandwich from a VCR has put it in the tech spotlight, even though its results can be misleading or incorrect.

Google’s Apprentice Bard tool is similar to ChatGPT, but the company is also testing a question-and-answer format for Google search results, CNBC said. Jeff Dean, the head of Google Research, told staff at a 2022 meeting that Google is being more conservative in its approach to avoid the “reputational risk” of AI offering wrong answers.

Google did not comment on the specifics of the report, but spokeswoman Lily Lin said in a statement, “We have a long history of focusing on developing and using AI to improve people’s lives. … We continue to test our AI technology internally to make sure it is useful and safe, and we look forward to sharing more experiences externally soon.”

AI technology is already all around us, helping with everything from detecting credit card fraud to translating our speech into text messages. But ChatGPT has raised expectations, making it clear the technology will become more important in our lives one way or another as we come to rely on digital assistants and online tools.

Google AI subsidiary DeepMind is also involved. Chief Executive Demis Hassabis told Time that his company is considering a private beta test in 2023 of an AI chatbot called Sparrow.

Google detailed transformers in 2017, and they have since become a staple of some of the biggest AI systems. Nvidia’s new H100 processor – the top performer in the world of AI acceleration, at least in terms of public speed tests – includes special circuitry to accelerate transformers.

The resulting LLM (Large Language Model) revolution in AI is useful for language-specific systems like ChatGPT, Google’s LaMDA, and the newer PaLM, as well as others from companies like AI21 Labs, Adept AI Labs and Cohere. But LLMs are also used for other tasks, including stacking boxes and processing genetic data to research new drugs. In particular, they are good for generating text, which is why they can be used to answer questions.

Google, which suffered bad publicity following the departure of AI researcher Timnit Gebru in 2020, has an agenda focused on responsible AI and machine learning, or ML, technology. “Developing ML models and products responsibly and ethically is both our primary goal and our promise,” said Marian Croak, vice president of Google Research, in a January post.

Google is keen to tout its deep AI expertise. ChatGPT triggered a “Code Red” emergency within Google, according to the New York Times, and brought Google co-founders Larry Page and Sergey Brin back to active work.

Editor’s note: CNET is using an artificial intelligence engine to create some personal finance explainers, which are edited and verified by our editors. For more information, see this post.

