by Editor | Dec 8, 2019 | News
Professor Rosabeth Moss Kanter spoke with Professor Alex "Sandy" Pentland about rules and regulations for data under the AIWS Social Contract 2020. Speaking at the AIWS Conference at the Harvard University Faculty Club, she called on non-governmental organizations to build international solutions and tools to protect individuals' rights over their information and data and to address fake news, and expressed hope that the AIWS Social Contract could help solve these issues.
Rosabeth Moss Kanter holds the Ernest L. Arbuckle Professorship at Harvard Business School, specializing in strategy, innovation, and leadership for change. Her strategic and practical insights guide leaders worldwide through teaching, writing, and direct consultation to major corporations, governments, and start-up ventures. She co-founded the Harvard University-wide Advanced Leadership Initiative, guiding its planning from 2005 to its launch in 2008 and serving as Founding Chair and Director from 2008-2018 as it became a growing international model for a new stage of higher education preparing successful top leaders to apply their skills to national and global challenges. Author or co-author of 20 books, her latest book, to be published in January 2020, is Think Outside the Building: How Advanced Leaders Can Change the World One Smart Innovation at a Time.
by Editor | Dec 8, 2019 | News
The popularity of AI and ML has wide-reaching effects on your enterprise. Here are three important trends driven by AI to look out for next year.
The Rise of AutoML 2.0 Platforms
As the need for additional AI applications grows, businesses will need to invest in technologies that help them accelerate the data science process. However, implementing and optimizing machine learning models is only part of the data science challenge. In fact, the vast majority of the work that data scientists must perform is associated with the tasks that precede the selection and optimization of ML models, such as feature engineering, the heart of data science.
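As a small illustration of why feature engineering dominates the workload, the sketch below derives aggregate features from a raw transaction log before any model is ever selected. The data and column names are hypothetical; the point is that this preparatory step, not model fitting, is what AutoML 2.0 platforms aim to automate.

```python
import pandas as pd

# Hypothetical raw transaction log (the kind of data that arrives
# long before any model is chosen).
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [10.0, 20.0, 5.0, 7.0, 3.0],
})

# Hand-crafted aggregate features per customer -- in practice,
# steps like this consume most of a data scientist's time.
features = raw.groupby("customer_id")["amount"].agg(
    total_spend="sum",
    avg_spend="mean",
    n_transactions="count",
).reset_index()

print(features)
```

The resulting `features` table, not the raw log, is what would feed a downstream model; automating the discovery of such aggregates is what distinguishes "AutoML 2.0" from tools that only tune models.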
The Shift to Automation Will Intensify Focus on Privacy and Regulations
As AI and ML models become easier to create using advanced “AutoML 2.0” platforms, data scientists and citizen data scientists will begin to scale ML and AI model production in record numbers. This means organizations will need to pay special attention to data collection, maintenance, and privacy oversight to ensure that the creation of new, sophisticated models does not violate privacy laws or cause privacy concerns for consumers.
More Citizen Data Scientists Doing Data Science
Big data will continue to be on the upsurge in 2020 with a growing demand for skilled data scientists and a continued shortage of data science talent — creating ongoing challenges for businesses implementing AI and ML initiatives. Although AutoML platforms have alleviated some of the pressure on data science teams, they have not resulted in the productivity gains organizations are seeking from their AI and ML initiatives. As such, companies need better solutions to help them leverage their data for business insights.
To support AI technology and its development for social impact, the Michael Dukakis Institute for Leadership and Innovation (MDI) has established the AI World Society (AIWS) to invite participation and collaboration with think tanks, universities, non-profits, firms, and start-up companies that share its commitment to the constructive development of AI.
The original article can be found here.
by Editor | Dec 8, 2019 | News
The news: Customers in China who buy SIM cards or register new mobile-phone services must have their faces scanned under a new law that came into effect yesterday. China’s government says the new rule, which was passed into law back in September, will “protect the legitimate rights and interest of citizens in cyberspace.”
A controversial step: It can be seen as part of an ongoing push by China’s government to make sure that people use services on the internet under their real names, thus helping to reduce fraud and boost cybersecurity. On the other hand, it also looks like part of a drive to make sure every member of the population can be surveilled.
How do Chinese people feel about it? It’s hard to say for sure, given how strictly the press and social media are regulated, but there are hints of growing unease over the use of facial recognition technology within the country. From the outside, there has been a lot of concern over the role the technology will play in the controversial social credit system, and how it’s been used to suppress Uighur Muslims in the western region of Xinjiang.
Knock-on effect: How facial recognition plays out in China might have an impact on its use in other countries, too. Chinese tech firms are helping to create influential United Nations standards for the technology, The Financial Times reported yesterday. These standards will help shape rules on how facial recognition is used around the world, particularly in developing countries.
On the other hand, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS) have developed the AIWS Ethics and Practice Index to measure ethical values, help people achieve well-being and happiness, and address important issues such as the SDGs. Under this index, using AI technologies such as facial recognition for citizen surveillance, or to tightly control the press and social media, is not recommended.
The original article can be found here.
by Editor | Dec 2, 2019 | News
On September 23, 2019, at the AIWS Conference at the Harvard University Faculty Club, Former President of Ecuador Jamil Mahuad discussed the AIWS Social Contract 2020.
President Jamil Mahuad is a candidate for the Nobel Peace Prize and is now a visiting professor at Harvard University. He is very interested in the AIWS Social Contract 2020 and enthusiastically contributed ideas from the point of view of governments.
Some important questions that need to be solved for the AIWS Social Contract 2020 are:
- How can citizens, civic society, and the AI Assistant enforce laws and aid in governmental decision-making?
- Civic societies: how can civic society, citizens, and the AI Assistant influence decisions by governments and businesses to enforce common values and standards?
Discussion to build the AIWS Social Contract 2020 continues among groups of contributors, including world leaders, distinguished thinkers, and AIWS Young Leaders. The official version of the AIWS Social Contract 2020 will be launched on April 28, 2020 at Harvard University with the participation of world leaders, distinguished thinkers, and AIWS Young Leaders.
by Editor | Dec 2, 2019 | News
A great three-term governor of Massachusetts with notable successes in public transportation, Governor Michael Dukakis, Co-founder and Chair of the Boston Global Forum, recently told Boston.com: "Let's just get the damn public transportation system working well" and "The answer is clear in my judgment: Don't spend any more money on highways."
Governor Dukakis has been fighting highway development since the 1960s. He said, "This city would have been paved over if we let these guys do what they wanted to do. And by the time I left office, we had the best public transportation system in the country."
Governor Dukakis touted improvements to the MBTA during his time in office, including new cars and the Red Line extension to Alewife. But he said the transit system was neglected by subsequent administrations, amid projects like the Big Dig. Now, amid the region's congestion crisis, Dukakis says a simple, renewed focus on the MBTA could go a long way toward increasing ridership and freeing up the roads.
The original article can be found here.
by Editor | Dec 2, 2019 | News
On December 12, 2019, the AIWS Innovation Network will officially launch at the BGF Global Cybersecurity Day Symposium at Loeb House, Harvard University.
One of the components of the AIWS Innovation Network is to monitor and judge governments and companies with respect to AIWS Social Contract 2020 norms. AIWS Young Leaders in many countries will volunteer for this important mission. The core monitoring and judging team is based in Boston, USA.
They are professors, thought leaders, activists, and students, such as Professor Christo Wilson and activist Rebecca Leeper. Governor Michael Dukakis, Co-founder and Chairman of the Boston Global Forum, along with world leaders and distinguished thinkers, serve as mentors. After Japanese Minister of Defense Taro Kono gives the keynote speech, Mr. Yasuhide Nakayama, former Vice Minister for Foreign Affairs, will speak at this event on "How AIWS Young Leaders Can Convince Governments and Corporations to Respect AIWS Social Contract Norms."
by Editor | Dec 2, 2019 | Event Updates
Marc Rotenberg, President and Executive Director of the Electronic Privacy Information Center (EPIC) in Washington, D.C., will join the BGF Global Cybersecurity Day Symposium on December 12, 2019 at Loeb House, Harvard University.
Marc is a member of the Michael Dukakis Institute’s AIWS Standards and Practice Committee. He also spoke at AIWS Conference on AI-Government and Treaty on AI Ethics and Practices in September 2018 at Harvard University Faculty Club.
He teaches at Georgetown Law and frequently testifies before Congress on emerging privacy and civil liberties issues. He has served on several national and international advisory panels. He has authored many amicus briefs for federal and state courts. He is a founding board member and former Chair of the Public Interest Registry, which manages the .ORG domain. He is editor of “The AI Policy Sourcebook” (EPIC 2019), “EPIC v. DOJ: The Mueller Report” (EPIC 2019), “The Privacy Law Sourcebook” (EPIC 2018), “Privacy in the Modern Age: The Search for Solutions” (The New Press 2015), and author (with Anita Allen) of “Privacy Law and Society” (West 2016). He currently serves on expert panels for the Aspen Institute, the National Academies of Science, and the OECD. He is on the editorial boards of the European Data Protection Law Review, the Journal of National Security Law and Policy, and Law 360 Cybersecurity and Privacy. He is a graduate of Harvard College and Stanford Law School and received an LLM in International and Comparative Law from Georgetown Law. He served as Counsel to Senator Patrick J. Leahy on the Senate Judiciary Committee after graduation from law school.
by Editor | Dec 2, 2019 | News
After speaking at an MIT conference on emerging AI technology earlier this year, I entered a lobby full of industry vendors and noticed an open doorway leading to tall grass and shrubbery recreating a slice of the African plains. I had stumbled onto TrailGuard AI, Intel’s flagship AI for Good project, which the chip company describes as an artificial intelligence solution to the crime of wildlife poaching. Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red.
I was handed a printout with my blurry image next to a picture of an elephant, along with text explaining that the TrailGuard AI camera alerts rangers so they can capture poachers before they kill one of the 35,000 elephants slaughtered each year. Despite these good intentions, I couldn't help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel's computer vision?
This is not to say tech companies should not work to serve the common good. With AI poised to impact much of our lives, they have more of a responsibility to do so. To start, companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both the benefits and the harms these AI tools may have in the long run. To this end, the Artificial Intelligence World Society (AIWS) has developed the AIWS Ethics and Practice Index to measure ethical values and improve the transparency of AI applications in daily life.
The original article can be found here.
by Editor | Dec 2, 2019 | News
Throughout 2020, a wave of AI hardware startups will launch their companies and products. Cerebras started this wave with its wafer-scale engine last September. This week, Intel announced its AI chips from Nervana, Groq (founded by the inventors of Google TPU) announced its quadrillion ops per second TSP, and Graphcore announced that its chip is available on Microsoft Azure and Dell servers. Last week, a startup named “Blaize,” previously named “Thinci,” emerged from stealth, having already reached key milestones in four areas: innovative hardware, a comprehensive software stack, a staff of over 325 employees, and most importantly, 15 pilot projects underway in the USA, Europe and Asia.
Architectural innovation forms the core of every AI HW startup. Simply adding more multiply/accumulate registers or on-die memory will be inadequate for most high-performance applications. Blaize’s team built a general-purpose graph processor which can natively process graph-based applications, including, but not limited to the Deep Neural Networks which lie at the heart of most modern AI work. While the company claims this architecture can deliver massive gains in efficiency, we will need to await production-ready silicon next year to evaluate how well it performs against other engines that are coming to market.
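To make the "graph-based" framing concrete, the toy sketch below (which is only an illustration of the general idea, not Blaize's actual design) represents a single neuron, y = relu(x*w + b), as a graph of operations evaluated in dependency order. This is the sense in which a deep neural network is itself a graph that a graph-native processor can execute directly.

```python
# Toy computation-graph evaluator: each node names an operation
# and the nodes it depends on.

def evaluate(graph, inputs):
    """Evaluate each node after all of its dependencies are ready.

    Relies on the dict being listed in topological order (Python 3.7+
    dicts preserve insertion order).
    """
    values = dict(inputs)
    for name, (op, deps) in graph.items():
        values[name] = op(*[values[d] for d in deps])
    return values

# y = relu(x * w + b), expressed as a graph of three operation nodes
graph = {
    "mul":  (lambda x, w: x * w,    ["x", "w"]),
    "add":  (lambda m, b: m + b,    ["mul", "b"]),
    "relu": (lambda a: max(0.0, a), ["add"]),
}

result = evaluate(graph, {"x": 2.0, "w": 3.0, "b": -1.0})
print(result["relu"])  # 2*3 - 1 = 5.0
```

A general-purpose graph processor, by this analogy, would schedule such nodes in hardware rather than in a Python loop, which is where the claimed efficiency gains would come from.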
To support AI technology and development, the Michael Dukakis Institute for Leadership and Innovation (MDI) has established the AI World Society (AIWS) to invite participation and collaboration with think tanks, universities, non-profits, firms, and start-up companies that share its commitment to the constructive development of AI.
The original article can be found here.