DeepMind AGI paper adds urgency to ethical AI



It’s a great year for artificial intelligence. Businesses are spending more on large AI projects, and new investment in AI startups is on pace for a record year. All of this spending and investment is producing results that bring us all a step closer to the long-sought holy grail: artificial general intelligence (AGI). According to McKinsey, many academics and researchers maintain that there is at least a chance that human-level artificial intelligence will be achieved within the next decade. And one researcher states: “AGI is not a distant fantasy. It will be upon us sooner than most people think.”

Another boost comes from AI research lab DeepMind, which recently published a compelling peer-reviewed paper in the journal Artificial Intelligence titled “Reward Is Enough.” The authors posit that reinforcement learning – a form of deep learning based on behavioral rewards – will one day lead to replicating human cognitive abilities and achieving AGI. Such a breakthrough would allow for instantaneous calculation and perfect memory, leading to an artificial intelligence that would outperform humans at nearly every cognitive task.

We are not ready for artificial general intelligence

Despite assurances from its proponents that AGI will benefit all of humanity, there are already real problems with today’s narrow, single-purpose AI algorithms that call that assumption into question. According to a Harvard Business Review article, when AI applications ranging from predictive policing to automated credit scoring go unchecked, they pose a serious threat to our society. A recently published Pew Research survey of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030. This stems from a widespread belief that businesses will prioritize profits and that governments will continue to surveil and control their populations. If it is this difficult to bring transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appears astronomical.

And that concern is just about how the AI itself works. The political and economic impacts of AI could produce a range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. It is also possible that both extremes could coexist. For instance, if the wealth generated by AI is distributed across society, it could contribute to the utopian vision. But so far we have seen AI concentrate power, with a relatively small number of companies controlling the technology. That concentration of power sets the stage for the feudal dystopia.

Maybe less time than expected

The DeepMind paper describes how AGI could be achieved. Getting there is still a long way off, roughly 20 years by some estimates, though recent advances suggest the timeline will land at the shorter end of that spectrum, and possibly even sooner. I argued last year that OpenAI’s GPT-3 moved AI into a twilight zone, a territory between narrow AI and general AI. GPT-3 is capable of many different tasks without additional training, able to produce compelling narratives, generate computer code, autocomplete images, translate between languages, and perform mathematical calculations, among other feats, including some its creators had not planned for. This apparent multifunctional capability doesn’t sound much like the definition of narrow AI. Indeed, it is far more general in function.

Even so, today’s deep learning algorithms, including GPT-3, are unable to adapt to changing circumstances, a fundamental distinction that separates today’s AI from AGI. One step toward adaptability is multimodal AI, which combines the language processing of GPT-3 with other capabilities such as visual processing. For example, building on GPT-3, OpenAI introduced DALL-E, which generates images based on the concepts it has learned. Using a simple text prompt, DALL-E can produce “a painting of a capybara sitting in a field at sunrise.” Though it may never have “seen” such an image before, it can combine what it has learned about paintings, capybaras, fields, and sunrises to produce dozens of images. It is thus multimodal, and more capable and general, though still not AGI.

Researchers at the Beijing Academy of Artificial Intelligence (BAAI) in China recently introduced Wu Dao 2.0, a multimodal AI system with 1.75 trillion parameters. That is just over a year after the introduction of GPT-3, and an order of magnitude larger. Like GPT-3, the multimodal Wu Dao – the name means “enlightenment” – can perform natural language processing, text generation, image recognition, and image generation tasks. But it can do them faster, arguably better, and it can even sing.

Conventional wisdom holds that achieving AGI is not necessarily a matter of increasing the computing power and number of parameters of a deep learning system. However, there is a view that complexity gives rise to intelligence. Last year, Geoffrey Hinton, the University of Toronto professor who is a deep learning pioneer and Turing Award winner, said: “There are a trillion synapses in a cubic centimeter of the brain. If there is general AI, [the system] would probably require a trillion synapses.” Synapses are the biological equivalent of deep learning model parameters.
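
For a concrete sense of what “parameters” means in that comparison, here is a minimal sketch (assuming PyTorch is available; the layer sizes are arbitrary illustrations, not any real model’s architecture) that counts the trainable weights of a small network, the same bookkeeping behind the headline figures quoted for GPT-3 or Wu Dao 2.0.

```python
import torch.nn as nn

# A small feed-forward network; the layer sizes are arbitrary, purely for illustration.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# A model's "parameter count" is simply the total number of trainable weights and biases,
# the rough analogue of synapses in the comparison above.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{n_params:,} trainable parameters")  # 1024*4096 + 4096 + 4096*1024 + 1024 = 8,393,728
```

Scale the same arithmetic up by several orders of magnitude and you arrive at the 175 billion parameters of GPT-3 or the 1.75 trillion of Wu Dao 2.0.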

Wu Dao 2.0 apparently hit that number. BAAI President Dr. Zhang Hongjiang said upon the release of version 2.0: “The road to artificial general intelligence is through large models and [a] big computer.” Just weeks after the Wu Dao 2.0 release, Google Brain announced a deep learning computer vision model containing two billion parameters. While it is not a given that the recent pace of gains in these areas will continue, there are models suggesting computers could have as much computing power as the human brain by 2025.

[Chart source: Mother Jones]

The increase in computing power and the maturation of models pave the way for AGI

Reinforcement learning algorithms attempt to mimic humans by learning how best to achieve a goal through seeking rewards. With AI models like Wu Dao 2.0 and computing power both growing exponentially, could reinforcement learning – machine learning through trial and error – be the technology that leads to AGI, as DeepMind believes?
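
To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, the textbook form of reinforcement learning, on a made-up five-state corridor. The environment, the reward of 1.0 at the goal, and the hyperparameters are illustrative assumptions only, not anything drawn from DeepMind’s paper or systems.

```python
import random

# Toy corridor: states 0 through 4. The agent starts at state 0 and receives a
# reward of 1.0 only when it reaches state 4. Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: the agent's running estimate of future reward for each (state, action) pair.
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[state][a])
        next_state, reward, done = step(state, action)
        # Nudge the estimate toward the observed reward plus discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # after training, right-moving actions carry the higher value in every state
```

Each update nudges the value estimate for the action just taken toward the reward received plus the discounted value of the best next action; reward-seeking alone shapes the behavior, which is the mechanism the DeepMind paper argues can scale to far richer abilities.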

The technique is already widely used and continues to gain adoption. For example, self-driving car companies like Wayve and Waymo use reinforcement learning to develop the control systems for their cars. The military is actively using reinforcement learning to develop collaborative multi-agent systems, such as teams of robots that could work side by side with future soldiers. McKinsey recently helped Emirates Team New Zealand prepare for the 2021 America’s Cup by building a reinforcement learning system that could test any kind of boat design in digitally simulated, real-world sailing conditions. This gave the team a performance advantage that helped it secure its fourth Cup victory.

Google recently used reinforcement learning on a dataset of 10,000 computer chip designs to develop its next-generation TPU, a chip purpose-built to accelerate the performance of AI applications. Work that previously took a team of human design engineers many months can now be done by AI in under six hours. Google is thus using AI to design chips that can be used to create even more sophisticated AI systems, further speeding up already exponential performance gains through a virtuous cycle of innovation.

While these examples are compelling, they are still narrow use cases for AI. Where is the AGI? The DeepMind paper states: “Reward is enough to drive behavior that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalization and imitation.” The implication is that AGI will emerge naturally from reinforcement learning as models grow more sophisticated and computing power expands.

Not everyone agrees with DeepMind’s view, and some are already dismissing the paper as a publicity stunt meant to keep the lab in the news rather than advance the science. Even so, if DeepMind is right, it is all the more imperative to instill ethical and responsible AI practices and norms across industry and government. With the rapid pace of AI acceleration and advancement, we clearly cannot afford to bet that DeepMind is wrong.

Gary Grossman is Senior Vice President of Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.


