It is true that much work has been done by the European Commission since President Ursula von der Leyen and her team took office. A “legislative proposal” on AI was promised as early as December 2019; what was delivered, in February, was an AI White Paper. While this, admittedly, is not a legislative proposal, it is a document that has kick-started the debate on human-centric and ethical AI, the use of Big Data, and how these technologies can be used to create wealth for society and business.
The Commission’s White Paper emphasizes the importance of establishing a uniform approach to AI across the EU’s 27 member states, where individual countries have started to take their own approaches to regulation and are thus potentially erecting barriers to the EU’s single market. Importantly for Huawei, it also sets out plans to take a risk-based approach to regulating AI.
At Huawei we studied the White Paper with interest and, along with more than 1,250 other stakeholders, contributed to the Commission’s public consultation, which closed on 14 June, giving our input and ideas as experts working in this field.
Finding the balance
The main point that we emphasized to the Commission is the need to find the right balance between allowing innovation and ensuring adequate protection for citizens.
In particular, we focused on the need for high-risk applications to be regulated under a clear legal framework, and proposed ideas for how AI should be defined. In our view, the definition of AI should come down to its application, with risk assessments focusing on the intended use of the application and the type of impact resulting from the AI function. If detailed assessment lists and procedures are in place for companies to carry out their own self-assessments, this will reduce the cost of the initial risk assessment, which must match sector-specific requirements.
We have recommended that the Commission look into bringing together consumer organizations, academia, member states, and businesses to assess whether an AI system may qualify as high-risk. There is already an established body set up to address exactly these questions: the standing Technical Committee High Risk Systems (TCRAI). We believe this body could assess and evaluate AI systems against high-risk criteria, both legally and technically. If it took on this role, combined with a voluntary labelling system, the result would be a governance model that:
• considers the entire supply chain;
• sets the right criteria and targets the intended goal of transparency for consumers and businesses;
• incentivizes the responsible development and deployment of AI; and
• creates an ecosystem of trust.
Outside of high-risk applications of AI, we have stated to the Commission that the existing legal framework based on fault-based and contractual liability is sufficient, even for state-of-the-art technologies like AI, where there may be a fear that new technology requires new rules. Extra regulation is, however, unnecessary; it would be over-burdensome and would discourage the adoption of AI.
From what we know of the current thinking within the Commission, it appears that it also plans to take a risk-based approach to regulating AI. Specifically, the Commission proposes focusing in the short term on “high-risk” AI applications, meaning applications either in high-risk sectors (such as healthcare) or in high-risk uses (for example, uses that produce legal or similarly significant effects on the rights of an individual).
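To make that risk-based logic concrete, the sketch below is a purely illustrative reading of the rule described above; the sector list, parameter names, and function are our own hypothetical shorthand, not definitions from the White Paper or from Huawei’s submission.

```python
# Purely illustrative sketch of the "high-risk" rule described above.
# The sector list and names are hypothetical examples, not anything
# defined by the Commission or by Huawei.

HIGH_RISK_SECTORS = {"healthcare", "transport", "energy"}  # example sectors only


def is_high_risk(sector: str, has_legal_or_significant_effects: bool) -> bool:
    """Treat an application as high-risk if it operates in a high-risk sector
    or its intended use produces legal or similarly significant effects on
    the rights of an individual."""
    return sector in HIGH_RISK_SECTORS or has_legal_or_significant_effects


# A diagnostic tool used in healthcare would fall under the rule; a retail
# recommendation engine with no such effects would not.
print(is_high_risk("healthcare", False))  # True
print(is_high_risk("retail", False))      # False
```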
So, what happens next?
The Commission has a lot of work to do in getting through all the consultation responses, taking into account the needs of business, civil society, trade associations, NGOs and others. The additional burden of working through the coronavirus crisis has not helped matters, with the formal response from the Commission now not expected until Q1 2021.
Coronavirus has been a game-changer for technology use in healthcare of course, and will no doubt have an impact on the Commission’s thinking in this area. Terms such as “telemedicine” have been talked about for years, but the crisis has turned virtual consultations into reality – almost overnight.
Beyond healthcare, we see AI deployment continuing to roll out in areas such as farming and in the EU’s efforts to combat climate change. We are proud at Huawei to be part of this continued digital development in Europe, a region in which and for which we have been working for 20 years. The development of digital skills is at the heart of this: it not only equips future generations with the tools to seize the potential of AI, but also enables the current workforce to stay active and agile in an ever-changing world. That calls for an inclusive, lifelong-learning-based and innovation-driven approach to AI education and training, helping people transition between jobs seamlessly. The job market has been heavily impacted by the crisis, and quick solutions are needed.
As we wait for the Commission’s formal response to the White Paper, what more is there to say about AI in Europe? Better healthcare, safer and cleaner transport, more efficient manufacturing, smart farming, and cheaper, more sustainable energy sources: these are just a few of the benefits AI can bring to our societies, and to the EU as a whole. Huawei will work with EU policymakers and will strive to ensure the region gets the balance right: innovation combined with consumer protection.