Trust in Artificial Intelligence: A global study, 2023
KPMG.com.au
uq.edu.au

Citation
Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A. (2023). Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. doi:10.14264/00d3c94

University of Queensland Researchers
Professor Nicole Gillespie, Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool. The University of Queensland team led the design, conduct, analysis and reporting of this research.

KPMG Advisors
James Mabbott, Rita Fentener van Vlissingen, Jessica Wyndham, and Richard Boele.

Acknowledgements
We are grateful for the insightful input, expertise and feedback on this research provided by Dr Ali Akbari, Dr Ian Opperman, Rossana Bianchi, Professor Shazia Sadiq, Mike Richmond, and Dr Morteza Namvar, and members of the Trust, Ethics and Governance Alliance at The University of Queensland, particularly Dr Natalie Smith, Associate Professor Martin Edwards, Dr Shannon Colville and Alex Macdade.

Funding
This research was supported by an Australian Government Research Support Package grant provided to The University of Queensland AI Collaboratory, and by the KPMG Chair in Trust grant (ID 2018001776).

Acknowledgement of Country
The University of Queensland (UQ) acknowledges the Traditional Owners and their custodianship of the lands. We pay our respects to their Ancestors and their descendants, who continue cultural and spiritual connections to Country. We recognise their valuable contributions to Australian and global society.

© 2023 The University of Queensland ABN: 63 942 912 684 CRICOS Provider No: 00025B. © 2023 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved. The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organisation. Liability limited by a scheme approved under Professional Standards Legislation.

Contents
Executive summary 02
Introduction 07
How we conducted the research 08
1. To what extent do people trust AI systems? 11
2. How do people perceive the benefits and risks of AI? 22
3. Who is trusted to develop, use and govern AI? 29
4. What do people expect of the management, governance and regulation of AI? 34
5. How do people feel about AI at work? 43
6. How well do people understand AI? 53
7. What are the key drivers of trust in and acceptance of AI? 60
8. How have trust and attitudes towards AI changed over time? 66
Conclusion and implications 70
Appendix 1: Method and statistical notes 73
Appendix 2: Country samples 75
Appendix 3: Key indicators for each country 77

Executive summary

Artificial Intelligence (AI) has become a ubiquitous part of everyday life and work. AI is enabling rapid innovation that is transforming the way work is done and how services are delivered. For example, generative AI tools such as ChatGPT are having a profound impact. Given the many potential and realised benefits for people, organisations and society, investment in AI continues to grow across all sectors1, with organisations leveraging AI capabilities to improve predictions, optimise products and services, augment innovation, enhance productivity and efficiency, and lower costs, amongst other beneficial applications.

However, the use of AI also poses risks and challenges, raising concerns about whether AI systems (inclusive of data, algorithms and applications) are worthy of trust. These concerns have been fuelled by high-profile cases of AI use that were biased, discriminatory, manipulative, unlawful, or violated human rights. Realising the benefits AI offers, and the return on investment in these technologies, requires maintaining the public's trust: people need to be confident AI is being developed and used in a responsible and trustworthy manner. Sustained acceptance and adoption of AI in society are founded on this trust.

This research is the first deep-dive examination of the public's trust and attitudes towards the use of AI, and expectations of the management and governance of AI, across the globe. We surveyed over 17,000 people from 17 countries covering all global regions: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom (UK), and the United States of America (USA). These countries are leaders in AI activity and readiness within their region. Each country sample is nationally representative of the population based on age, gender, and regional distribution.
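To make the representativeness claim concrete, the short Python sketch below shows one common technique for aligning a survey sample with population demographics: post-stratification weighting, in which each demographic cell is weighted by the ratio of its population share to its share of the sample. The report does not describe its sampling or weighting procedure in this summary, so the age bands, respondent counts, and census shares below are hypothetical and purely illustrative.

# Illustrative only: hypothetical post-stratification weighting for one
# country sample. The cells, counts and census shares are invented for
# demonstration and are not taken from the study.
import pandas as pd

# Hypothetical respondents per demographic cell in the raw sample
sample = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "gender":   ["female", "male", "female", "male", "female", "male"],
    "n":        [220, 180, 190, 210, 160, 140],
})

# Hypothetical census proportions for the same cells (sum to 1)
population_share = {
    ("18-34", "female"): 0.15, ("18-34", "male"): 0.15,
    ("35-54", "female"): 0.18, ("35-54", "male"): 0.17,
    ("55+", "female"): 0.19,   ("55+", "male"): 0.16,
}

sample["sample_share"] = sample["n"] / sample["n"].sum()
sample["pop_share"] = [
    population_share[(a, g)]
    for a, g in zip(sample["age_band"], sample["gender"])
]
# Cells under-represented in the sample receive weights above 1, and
# over-represented cells receive weights below 1, so weighted estimates
# match the census mix of age and gender.
sample["weight"] = sample["pop_share"] / sample["sample_share"]
print(sample[["age_band", "gender", "weight"]])

In practice, research panels are often recruited to demographic quotas rather than (or in addition to) being weighted afterwards; the sketch is only meant to illustrate what "representative by age, gender and region" implies numerically.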

We asked survey respondents about trust and attitudes towards AI systems in general, as well as AI use in the context of four application domains where AI is rapidly being deployed and likely to impact many people: in healthcare, public safety and security, human resources and consumer recommender applications.

The research provides comprehensive, timely, global insights into the public's trust and acceptance of AI systems, including who is trusted to develop, use and govern AI, the perceived benefits and risks of AI use, community expectations of the development, regulation and governance of AI, and how organisations can support trust in their AI use. It also sheds light on how people feel about the use of AI at work, current understanding and awareness of AI, and the key drivers of trust in AI systems. We also explore changes in trust and attitudes to AI over time. Next, we summarise the key findings.

Most people are wary about trusting AI systems and have low or moderate acceptance of AI: however, trust and acceptance depend on the AI application

Across countries, three out of five people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. Trust is particularly low in Finland and Japan, where less than a quarter of people report trusting AI. In contrast, people in the emerging economies of Brazil, India, China and South Africa (BICS2) have the highest levels of trust, with the majority of people trusting AI systems. People have more faith in AI systems to produce accurate and reliable output and provide helpful services, and are more sceptical about the safety, security and fairness of AI systems and the extent to which they uphold privacy rights.

Trust in AI systems is contextual and depends on the specific application or use case. Of the applications we examined, people are generally less trusting and accepting of AI use in human resources (i.e. for aiding hiring and promotion decisions), and more trusting of AI use in healthcare (i.e. for aiding medical diagnosis and treatment), where there is a direct benefit to them. People are generally more willing to rely on, than share information with, AI systems, particularly recommender systems (i.e. for personalising news, social media, and product recommendations) and security applications (i.e. for aiding public safety and security decisions).

Many people feel ambivalent about the use of AI, reporting optimism or excitement on the one hand, while simultaneously reporting worry or fear. Overall, two-thirds of people feel optimistic about the use of AI, while about half feel worried. While optimism and excitement are dominant emotions in many countries, particularly the BICS countries, fear and worry are dominant emotions for people in Australia, Canada, France, and Japan, with people in France the most fearful, worried, and outraged about AI.

People recognise the many benefits of AI, but only half believe the benefits outweigh the risks

People's wariness and ambivalence towards AI can be partly explained by their mixed views of the benefits and risks. Most people (85%) believe AI results in a range of benefits, and think that process benefits, such as improved efficiency, innovation, effectiveness, resource utilisation and reduced costs, are greater than the people benefits of enhancing decision-making and improving outcomes for people. However, on average, only one in two people believe the benefits of AI outweigh the risks. People in the western countries and Japan are particularly unconvinced that the benefits outweigh the risks. In contrast, the majority of people in the BICS countries and Singapore believe the benefits outweigh the risks.

People perceive the risks of AI in a similar way across countries, with cybersecurity rated as the top risk globally

While there are differences in how the AI benefit-risk ratio is viewed, there is considerable consistency across countries in the way the risks of AI are perceived. Just under three-quarters (73%) of people across the globe report feeling concerned about the potential risks of AI. These risks include cybersecurity and privacy breaches, manipulation and harmful use, loss of jobs and deskilling, system failure, the erosion of human rights, and inaccurate or biased outcomes. In all countries, people rated cybersecurity risks as their top one or two concerns, and bias as the lowest concern. Job loss due to automation is also a top concern in India and South Africa, and system failure ranks as a top concern in Japan, potentially reflecting their relatively heavy dependence on smart technology. These findings reinforce the critical importance of protecting people's data and privacy to secure and preserve trust, and of supporting global approaches and international standards for managing and mitigating AI risks across countries.

There is strong global endorsement for the principles of trustworthy AI: trust is contingent on upholding and assuring these principles are in place

Our findings reveal strong global public support for the principles and related practices organisations deploying AI systems are expected to uphold in order to be trusted. Each of the Trustworthy AI principles originally proposed by the European Commission3 is viewed as highly important for trust across all 17 countries, with data privacy, security and governance viewed as most important in all countries. This demonstrates that people expect organisations deploying AI systems to uphold high standards of:
- data privacy, security and governance
- technical performance, accuracy and robustness
- fairness, non-discrimination and diversity
- human agency and oversight
- transparency and explainability
- accountability and contestability
- risk and impact mitigation
- AI literacy support

People expect these principles to be in place for each of the AI use applications we examined (e.g., Human Resources, Healthcare, Security, Recommender, and AI systems in general), suggesting their universal application. This strong public endorsement provides a blueprint for developing and using AI in a way that supports trust across the globe. Organisations can directly build trust and consumer willingness to use AI systems by supporting and implementing assurance mechanisms that help people feel confident these principles are being upheld. Three out of four people would be more willing to trust an AI system when assurance mechanisms are in place that signal ethical and responsible use, such as monitoring system accuracy and reliability, independent AI ethics reviews, AI ethics certifications, adhering to standards, and AI codes of conduct. These mechanisms are particularly important given the current reliance on industry regulation and governance in many jurisdictions.

People are most confident in universities and defence organisations to develop, use and govern AI, and least confident in government and commercial organisations

People have the most confidence in their national universities and research institutions, as well as their defence organisations, to develop, use and govern AI in the best interest of the public (76–82% confident). In contrast, they have the least confidence in governments and commercial organisations to do this. A third of people lack confidence in government and commercial organisations to develop, use and regulate AI. This is problematic given the increasing scope with which governments and commercial organisations are using AI, and the public's expectation that these entities will responsibly govern and regulate its use. An implication is that government and business can partner with more trusted entities in the use and governance of AI.

There are significant differences across countries in people's trust of their government to use and govern AI, with about half of people lacking confidence in their government in South Africa, Japan, the UK and the USA, whereas the majority in China, India and Singapore have high confidence in their government. This pattern mirrors people's general trust in their governments: we found a strong association between people's general trust in government, commercial organisations and other institutions, and their confidence in these entities to use and govern AI. These findings suggest that taking action to strengthen trust in institutions generally is an important foundation for trust in specific AI activities.

People expect AI to be regulated with some form of external, independent oversight, but view current regulations and safeguards as inadequate

The large majority of people (71%) expect AI to be regulated. With the exception of India, the majority in all other countries see regulation as necessary. This finding corroborates prior surveys4 indicating a strong desire for regulation of AI, and is not surprising given most people (61%) believe the long-term impact of AI on society is uncertain and unpredictable. People are broadly supportive of multiple forms of regulation, including regulation by government and existing regulators, a dedicated independent AI regulator, and co-regulation.
