Netease Technology News, January 20. According to foreign media reports, Sundar Pichai, CEO of Google and its parent company Alphabet, personally penned an article a few days ago arguing that artificial intelligence (AI) is too important not to be regulated. People are rightly worried about the potential negative consequences of AI, he wrote, and companies cannot just build new technology and let market forces decide how it gets used.
The full text of Pichai’s article is as follows:
I grew up in India and have always been fascinated by technology. Each new invention changed the lives of my family and me in meaningful ways. The telephone meant we no longer had to travel long distances to the hospital just to get test results.
The refrigerator meant we could spend less time preparing meals, and television let us watch the world news and cricket matches we had previously only imagined while listening on shortwave radio.
Now I am honored to help shape new technologies that we hope will change lives around the world. Among the most promising of these is AI: just this month alone, there have been three concrete examples of how Alphabet and Google are tapping AI's potential.
Nature published our research showing that an AI model can help doctors detect breast cancer more accurately in mammograms; as part of our work to combat climate change, we are using AI to make faster, more accurate immediate forecasts of rainfall; and Lufthansa is working with our cloud computing division to test whether AI can help reduce flight delays.
History offers many examples, however, of technological progress bringing unexpected negative effects. The internal combustion engine allowed people to travel long distances, but it also caused more accidents.
The Internet made it possible to connect with anyone and find information from anywhere, but it also made misinformation easier to spread. These lessons tell us we need to be clear-eyed about the problems that may arise.
People do worry about the potential negative consequences of AI, from deepfakes to harmful uses of facial recognition technology. Although many companies have done a great deal to address these concerns, more challenges will inevitably emerge.
No single company or industry can address these challenges alone. The European Union and the United States have begun developing regulatory proposals, and international coordination will be key to making global standards work.
To achieve this, we need agreement on core values. Companies like ours cannot simply create promising new technology and then let market forces decide how it is used. We also have a responsibility to ensure the technology is put to good use and made available to everyone. In my view, there is no doubt that AI needs to be regulated. The technology is too important to be left alone.
The only question is how to approach it. That is why Google published its own AI principles in 2018, designed to guide the ethical development and use of the technology. These guidelines help us avoid bias, test rigorously for safety, prioritize privacy, and make the technology accountable to people. They also specify areas where we will not design or deploy AI, such as supporting mass surveillance or violating human rights.
But principles that stay on paper have no practical significance. So we have also developed tools to put them into practice, such as testing AI decisions for fairness and conducting independent human rights assessments of new products.
We have gone further still, making these tools and related open-source code widely available so that others can use AI responsibly. We believe any company developing new AI tools should adopt similar guidelines and rigorous review processes, and that government regulation also has an important role to play.
We don’t have to start from scratch. Existing rules, such as Europe’s General Data Protection Regulation (GDPR), can serve as a solid foundation.
A good regulatory framework will consider safety, interpretability, fairness, and accountability to ensure that we develop the right tools in the right way. Sensible regulation must also take a proportionate approach, weighing potential harms, especially in high-risk areas. Regulation can provide broad guidance while allowing for tailored deployment of the technology in different sectors.
For some uses of AI, such as regulated medical devices including AI-assisted heart monitors, existing frameworks already provide a good regulatory starting point.
For newer areas such as autonomous vehicles, governments will need to establish appropriate new rules that weigh all the relevant costs and benefits. Google’s role begins with recognizing that applying AI requires a principled and regulated approach, but it does not stop there.
We want to be a partner to regulators as they grapple with the inevitable tensions and trade-offs. In tackling these issues together, we can offer our expertise, experience, and tools.
AI has the potential to improve the lives of billions of people, and the biggest risk may be the failure to achieve this goal. By ensuring that AI is built in a responsible way that benefits everyone, we can inspire future generations to believe in the transformative power of technology as I do.