by BGF | Apr 22, 2019 | News
AI is set to aid the development of nuclear fusion reactors in a big way by predicting when major disruptions could halt reactions and damage the reactor.
While some claim that a stable nuclear fusion reactor capable of producing near-limitless, clean energy could be just over a decade away, researchers are still very much in the experimental stage of development. The biggest obstacle to commercial energy production is controlling the intense, highly unstable plasma within the reactor; attempts so far have lasted only a matter of minutes.
However, a team at the US Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) has found a way to use the power of deep learning artificial intelligence (AI) to predict any disruptions that halt fusion reactions and damage doughnut-shaped tokamak reactors.

Image: © Korn V./Stock.adobe.com
“This research opens a promising new chapter in the effort to bring unlimited energy to Earth,” said Steve Cowley, director of PPPL, about the study published in Nature. “AI is exploding across the sciences and now it’s beginning to contribute to the worldwide quest for fusion power.”
Crucial to this new deep learning algorithm – called the Fusion Recurrent Neural Network (FRNN) – has been its access to 2TB of data provided by two major fusion facilities: the DIII-D National Fusion Facility in California and the Joint European Torus (JET) in the UK.
These facilities are the largest in the US and the world, respectively, and PPPL trained the AI system on their vast databases so that it can reliably predict disruptions on other tokamaks.
‘Fusion science is very exciting’
Speaking of the importance of deep learning to the project, the team said it can achieve what other forms of AI can’t within nuclear fusion development.
For example, while non-deep learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space.
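To illustrate the difference, a recurrent model consumes a whole sequence of profile vectors (e.g., temperature measured at several radial positions over successive time steps) rather than a single scalar reading. The following is a minimal, hypothetical sketch of that idea in plain Python; it is not the actual FRNN code, and the weights, sizes, and data are invented purely for illustration.

```python
import math

def step(h, x, W_h, W_x):
    """One recurrent step: new hidden state from previous state and input profile."""
    return [math.tanh(sum(wh * hj for wh, hj in zip(row_h, h)) +
                      sum(wx * xj for wx, xj in zip(row_x, x)))
            for row_h, row_x in zip(W_h, W_x)]

def run_rnn(profiles, hidden_size=4):
    """Consume a time sequence of radial temperature profiles.

    `profiles` is a list of per-time-step vectors, so the model sees the
    profile developing in both time (the sequence) and space (the vector
    entries) -- unlike a model that looks at one point in time.
    """
    n_in = len(profiles[0])
    # Tiny fixed weights for illustration; a real model would learn these.
    W_h = [[0.1] * hidden_size for _ in range(hidden_size)]
    W_x = [[0.2] * n_in for _ in range(hidden_size)]
    h = [0.0] * hidden_size
    for x in profiles:
        h = step(h, x, W_h, W_x)
    return h  # summary of the entire evolving profile

# Invented sequence: 5 time steps of temperature at 3 radial positions.
profiles = [[1.0, 0.8, 0.5], [1.1, 0.9, 0.5], [1.3, 1.0, 0.6],
            [1.6, 1.2, 0.7], [2.0, 1.5, 0.9]]
h_final = run_rnn(profiles)
print(len(h_final))  # 4 hidden units summarizing the whole sequence
```

The final hidden state carries information about the entire trajectory of the plasma profile, which is what lets a recurrent architecture flag a developing disruption that a single-snapshot model would miss.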
So far, FRNN is able to predict true disruptions within the 30-millisecond warning time frame required by the International Thermonuclear Experimental Reactor (ITER), an international nuclear fusion megaproject underway in France, and it is closing in on the additional requirement of 95pc correct predictions with fewer than 3pc false alarms.
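For context, the two ITER targets can be expressed as simple rates over a set of discharges: the fraction of real disruptions correctly flagged (at least 95pc) and the fraction of non-disruptive discharges falsely flagged (under 3pc). A minimal sketch, with invented labels rather than real tokamak data:

```python
def disruption_rates(actual, predicted):
    """Compute (true-positive rate, false-alarm rate).

    actual / predicted: lists of booleans per discharge,
    True = disruption (real, or flagged by the predictor).
    """
    disruptions = sum(actual)
    healthy = len(actual) - disruptions
    true_pos = sum(a and p for a, p in zip(actual, predicted))
    false_alarm = sum((not a) and p for a, p in zip(actual, predicted))
    return true_pos / disruptions, false_alarm / healthy

# Invented example: 4 real disruptions, 6 healthy discharges.
actual    = [True, True, True, True, False, False, False, False, False, False]
predicted = [True, True, True, False, True, False, False, False, False, False]
tpr, far = disruption_rates(actual, predicted)
print(tpr, far)  # 0.75 true-positive rate, ~0.167 false-alarm rate
```

Against the ITER targets, this toy predictor would fail on both counts: it catches only 75pc of disruptions and raises false alarms on about 17pc of healthy discharges.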
Bill Tang, co-author of the research and a principal investigator at PPPL, said: “AI is the most intriguing area of scientific growth right now, and to marry it to fusion science is very exciting.
“We’ve accelerated the ability to predict with high accuracy the most dangerous challenge to clean fusion energy.”
The next step in the research will be to move from prediction to the control of disruptions, but this will be quite the challenge.
“We will combine deep learning with basic, first-principle physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas,” Tang said. “By control, one means knowing which ‘knobs to turn’ on a tokamak to change conditions to prevent disruptions. That’s in our sights and it’s where we are heading.”
by BGF | Apr 21, 2019 | News
Many countries are now competing to utilize AI, or artificial intelligence, in the military sphere. But that may lead to a nightmare: a world where AI-powered weapons kill people based on their own judgement, without any human intervention.
What are fully autonomous lethal weapons?
Fully autonomous lethal weapons powered by AI have become a major issue as rapid technological advances make them a real possibility.
They are different from the armed UAVs (unmanned aerial vehicles) already deployed in actual warfare: those are remotely controlled by humans, who make the final decisions about where and when to attack.
Autonomous AI weapons, on the other hand, would be able to make those decisions without human intervention.
It is estimated that at least 10 countries are developing AI-equipped weapons. The United States, China, and Russia, in particular, are engaging in fierce competition, believing that AI will be key to gaining a military edge over rival countries. Concerns are growing that the competition could lead to a new phase of the arms race.

Ban Lethal Autonomous Weapons is an NGO trying to highlight the dangers of these weapons.
A non-governmental organization calling for a ban on such weapons produced a video to demonstrate how dangerous these AI weapons could be.
The video shows a palm-sized drone that uses an AI-based facial recognition system to identify human targets and kill them by penetrating their skulls.
In the video, a swarm of micro-drones released from a vehicle flies to a target school, killing young people one after another as they try to flee.
The NGO warns that AI-based weapons may be used as a tool in terrorist attacks, not just in armed conflicts between states.
The video is complete fiction, but there are moves toward using such drone swarms in actual military activities.
In 2016, the US Department of Defense tested a swarm of 103 AI-based micro-drones launched from fighter jets. Their flights were not programmed in advance. They flew in formation without colliding, using AI to assess the situation for collective decision-making.
Radar imagery shows a swarm of green dots — drones — flying together, creating circles and other shapes.

An arms maker in Russia developed an AI weapon in the shape of a small military vehicle and released its promotional video. It shows the weapon finding a human-shaped target and shooting it. The company says the weapon is autonomous.
AI is also being eyed for command and control systems. The idea is to have AI help identify the most effective troop deployments or attacks.
The United States and other countries developing AI arms technology say the use of fully autonomous weapons will avoid casualties among their own service members. They also say it will reduce human errors, such as bombing the wrong targets.
Warnings from scientists
But many scientists disagree. They are calling for a ban on autonomous AI lethal weapons. Physicist Stephen Hawking, who died last year, was one of them.
Just before his death, he delivered a grave warning. What concerned him, he said, is that AI could start evolving on its own, and “in the future, AI could develop a will of its own, a will that is in conflict with ours.”
There are several issues concerning lethal AI weapons. One is ethical. It goes without saying that humans killing humans is unforgivable, but the question here is whether robots should be allowed to make decisions about human lives.
Another concern is that AI could lower the hurdles to war for government leaders, because it would reduce the costs of war and the loss of their own service members.
Proliferation of AI weapons to terrorists is also a grave issue. Compared with nuclear weapons, AI technology is far less costly and more easily available. If a dictator should get access to such weapons, they could be used in a massacre.
Finally, the biggest concern is that humans could lose control of them. AI devices are machines. And machines can go out of order or malfunction. They could also be subject to cyber-attacks.
As Hawking warned, AI could rise against humans. AI can quickly learn how to solve problems through deep learning based on massive data. Scientists say it could lead to decisions or actions that go beyond human comprehension or imagination.
In the board games of chess and go, AI has beaten human world champions with unexpected tactics. But why it employed those tactics remains unknown.
In the military field, AI might choose to use cruel means that humans would avoid, if it decides that would help to achieve a victory. That could lead to indiscriminate attacks on innocent civilians.
High hurdles for regulation
The global community is now working to create international rules to regulate autonomous lethal weapons.
Arms control experts are seeking to use the Convention on Certain Conventional Weapons, or CCW, as a framework for regulations. The treaty bans the use of landmines, among other weapons. Officials and experts from 120 CCW member countries have been discussing the issue in Geneva. They held their latest meeting in March.
They aim to impose regulations before specific weapons are created. Until now, arms ban treaties have been made after landmines and biological and chemical weapons were actually used and atrocities were committed. In the case of AI weapons, it would be too late to regulate them after fully autonomous lethal weapons come into existence.

International officials and experts are discussing regulating autonomous lethal weapons but haven’t reached a conclusion.
The talks have been continuing for more than five years, but delegates have failed even to agree on how to define “autonomous lethal weapons.”
Some are pessimistic that regulating AI weapons with a treaty is still viable, saying that as talks stall, the technology will advance quickly and the weapons will be completed.
Sources say discussions in Geneva are moving toward creating regulations less strict than a treaty. The idea is for each country to pledge to abide by international humanitarian law, then create its own rules and disclose them to the public. It is hoped that this will act as a brake.
In February, the US Department of Defense released its first Artificial Intelligence Strategy report. It says AI weapons will be kept under human control, and be used without violating international laws and ethics.
But challenges remain. Some question whether countries will interpret international laws in their favor to make regulations that suit them. Others say it may be difficult to confirm that human control is functioning.
Humans have created various tools capable of indiscriminate massacre, such as nuclear weapons. Now the birth of AI weapons that could be beyond human control is steering us toward an unknown and dangerous domain.
Whether humans can recognize the possible crisis and put a stop to it before it turns into a catastrophe is critical. It appears that human wisdom and ethics are now being tested.
by BGF | Apr 21, 2019 | News
Facebook is working on a voice assistant to rival the likes of Amazon’s Alexa, Apple’s Siri and the Google Assistant, according to several people familiar with the matter.
The tech company has been working on this new initiative since early 2018. The effort is coming out of the company’s augmented reality and virtual reality group, a division that works on hardware, including the company’s virtual reality Oculus headsets.
A team based out of Redmond, Washington, has been spearheading the effort to build the new AI assistant, according to two former Facebook employees who left the company in recent months. The effort is being led by Ira Snyder, director of AR/VR and Facebook Assistant. That team has been contacting vendors in the smart speaker supply chain, according to two people familiar with the matter.
It’s unclear how exactly Facebook envisions people using the assistant, but it could potentially be used on the company’s Portal video chat smart speakers, the Oculus headsets or other future projects.

Mark Zuckerberg, CEO of Facebook.
Jaap Arriens | NurPhoto | Getty Images
The Facebook assistant faces stiff competition. Amazon and Google are far ahead in the smart speaker market with 67% and 30% shares in the U.S. in 2018, respectively, according to eMarketer.
In 2015, Facebook released an AI assistant for its Messenger app called M. It was supposed to help users with smart suggestions, but the project depended heavily on the help of humans and never gained traction. Facebook killed the project last year.
The company in November began selling its Portal video chat device, which lets users place video calls using Facebook Messenger. Users can say “Hey Portal” to initiate very simple commands, but the device also comes equipped with Amazon’s Alexa assistant to handle more complex tasks.
by BGF | Apr 19, 2019 | News
TEL AVIV, Israel, 16 April 2019 – Israel’s reams of electronic medical records – health data on its population of around 8.9 million people – are proving fruitful for a growing number of digital health startups training algorithms to do things like detect diseases early and produce more accurate medical diagnoses.
According to a new report by Start-Up Nation Central, the growth in the number of Israeli digital health startups – 537 companies, up from 327 in 2014 – has drawn in new investors, including Israeli VCs who have never previously invested in healthcare. This has driven financing in the sector to a record $511M in 2018, up 32% year on year. By the first quarter of 2019 the amount raised was already at $214M.
Of the $511M, over 50% ($285M) went to companies in decision support and diagnostics, which rely heavily on data crunching. Overall, 85% ($433M) of the sector’s total financing went to health companies relying on some form of machine learning – a clear trend showing AI in the ascendancy. AI medical use cases include, but are not limited to, decision support tools for physicians, medical imaging analysis using computer vision, and big data analytics for population health management.

Also in 2018, new dedicated healthcare VCs were set up, and in early 2019 the largest ever venture capital fund raised in Israel, aMoon’s $660M fund, was earmarked for late stage health investment. Local and global hospital systems have started creating new joint ventures to test local startup technology, and the HMOs themselves are also establishing new innovation partnerships.
The electronic medical records have been gathered gradually over the past 25 years from the country’s 4 main health maintenance organizations (HMOs), allowing startups an increased ability to train and test artificial intelligence solutions, and partner with HMOs to validate their technology from early stages of development.
“With the combination of strong technological expertise and access to data, Israeli decision support companies, most of which utilize AI technologies, have been able to flourish, and have attracted increased levels of funding,” the report states.
Other important developments noted in the report:
An increase in Israeli investor activity in the sector: 124 investors invested in digital health companies in 2018 compared to 100 in 2017, with the growth coming mostly from a 66% increase in the involvement of Israeli investors, from 33 in 2017 to 55 in 2018. The data indicates increased local confidence in the sector, providing start-ups with support and expertise on the ground.
An increase in the number of later stage rounds: rising from 7 disclosed B and C+ rounds in 2017 to 12 in 2018. The combined capital raised in disclosed B and C rounds amounted to 50% of the total financing for the year compared with 30% in 2017. 17 investment rounds raised more than $10M each (80% of sector funding) in 2018 compared to 12 rounds in 2017 (68% of total funding). This indicates the sector’s maturation and availability of later stage funding, mostly from institutional investors.
Foreign hospitals and universities are increasingly coming to Israel to look for Digital Health technologies and to invest in local companies. For example, in 2018, three major US hospitals engaged with Israeli digital health: Intermountain Healthcare’s investment in Zebra Medical, Mt. Sinai Ventures’ contract with digital speech therapy company Novotalk, and Thomas Jefferson University’s pilot validation program in conjunction with the Israeli Innovation Authority for clinical care and hospital operations solutions.
Start-Up Nation Central’s report on Israel’s Digital Health industry offers a comprehensive and up-to-date analysis of the state of the Israeli Digital Health ecosystem and its trends.
by BGF | Apr 19, 2019 | News
For Chinese bureaucrats, getting a promotion isn’t just tied to their performance on the job — it’s increasingly about how well they behave in their leisure time.
Last month, the southeast city of Quanzhou became the latest to start rating civil servants’ personal behavior. Earlier, Wenzhou, a commercial hub in the east, began giving equal weight to behavior at work and at home for promotions and other rewards. The coastal city of Zhoushan also keeps files on the so-called social credit of public servants to assess them.
China is increasing pressure on its public servants, who are constantly expected to prove their loyalty to President Xi Jinping and the party. A new emphasis on personal behavior being rewarded over competence and ability is leaving bureaucrats disillusioned, as Xi curbs dissent and tightens his grip on power.

A member of the Armed Police stands guard under red flags at the Tiananmen Square in Beijing, China on Thursday, March 02 2017. Photographer: Qilai Shen/Bloomberg
The government is ramping up efforts to stamp out corruption among public servants and dissuade them from taking advantage of their positions and influence, amid an ambitious plan to build a nationwide social credit system by 2020 that would assign lifelong scores to citizens based on their behavior.
In January, China’s top judge Zhou Qiang vowed to strengthen rules barring government employees who defied court orders from making investments and holding certain jobs.
China is already monitoring civil servants through a number of avenues, including a mobile app that tests Communist Party members’ loyalty to the party. Millions of citizens have downloaded the app to score points with the government.
‘Dishonest Records’
China’s State Council listed public servants as a crucial test group for building personal credit systems in a 2016 document that was later adopted by 20 provinces. The council separately ordered that all court decisions, penalties, and disciplinary actions taken against civil servants be recorded in a system of “dishonest records,” collated on a national platform that names and shames individuals.
Public servants who end up on the list will face consequences in their performance reviews and when being considered for promotions. Provinces and cities across the country have since formulated local versions of the plan.
The advances made by Wenzhou in monitoring government workers parallel other social credit pilot programs across China. The city’s authorities said in August that public servants who defied or obstructed local court orders would face disciplinary action and have their wages withheld. In February, the courts teamed up with 41 government departments to share government employees’ social credit information.