Where Does AI Fit into Business and Society?
A series on AI, business, and society
The following series of posts discusses the increasing integration of artificial intelligence (AI) into our everyday lives and business models. While the discussion around AI has largely been confined to university classrooms and research labs, it’s undeniable that the AI explosion is affecting all aspects of our society. The practical implications, however, have remained largely undiscussed until now.
That’s why we at KISSPatent are here to report the latest in tech trends. This past May, dozens of tech industry leaders gathered at AI CON 2018 in New York to discuss trends in AI. To save you the trip, we decided to report the key takeaways that may be of interest to you as an entrepreneur or startup founder. Read on to find out what the industry’s leaders are thinking about in AI!
And, as always, if you have any questions, feel free to reach out to us directly and we’ll respond ASAP.
Humans vs. AI vs. Privacy: What does Europe think?
Our European friends have always appreciated and respected individual privacy rights when it comes to data collection from tech giants such as Google. In fact, they’ve valued it so much that they’ve passed sweeping legislation to provide additional privacy protections for European citizens—to the extent that they’ve changed the way that tech giants conduct business.
However, it’s not just tech giants who have access to private data. Remember the theme of our blog posts, AI?
AI requires access to large amounts of data, in order for the models to be trained. When this data becomes personal data, we start to see AI running into the issues we named above: mainly privacy and individual rights. The question remains: who will control access to our data, and who will determine how that data will be used?
Paul Nemitz of the European Commission gave Europe’s robust answer to these questions: all companies, including AI startups, will be required to respect individual privacy rights. In other words, “the algorithm made me do it” isn’t going to work for tech companies, as Google found out in the right-to-be-forgotten court case.
Nemitz argued that Europe is unique in its requirement for respecting individual privacy: in the US, personal data is controlled by a tech oligopoly, while in China it’s controlled by the government. This has led Europe to enforce human rights against the interests of tech companies, particularly for AI. He emphasized that no AI algorithm can ever be used to do something that would otherwise be illegal.
The question of humans vs. AI vs. privacy remains unsolved and will continue to develop as AI evolves. However, it remains clear that Europe is determined to protect privacy rights first and foremost, and will enforce laws as it deems necessary.
AI Applications in the Real World: World Poverty
When assessing world poverty, there’s one measure that the World Bank prefers: price. But measuring prices around the world for an array of goods and services is not an easy task. Imagine trying to gather information on prices for groceries…or even going to the dentist. Now imagine trying to gather those prices in 162 countries.
Why are these prices so important? By measuring the prices accurately and robustly, the World Bank can determine Purchasing Power Parity (PPP), which (if you remember from your college economics class) measures the total amount of goods and services that a single unit of a country’s currency can buy in another country. Or, you may be more familiar with PPP through The Economist’s “Big Mac Index,” which is a version of the PPP that measures the purchase price of Big Macs internationally.
When measuring world poverty, the World Bank uses the PPP to compare poverty rates across countries, and can then assess whether global poverty is rising or declining. Each country’s government statisticians are responsible for sampling the prices of baskets of goods and services and then reporting the information to the World Bank.
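To make the Big Mac version of PPP concrete, here is a toy calculation in Python. All prices and exchange rates below are made-up illustrations, not real survey data or official World Bank figures:

```python
# Toy Big Mac index: compute the implied PPP exchange rate and compare it
# to the actual market rate. All numbers here are illustrative only.

def implied_ppp_rate(local_price, us_price):
    """Exchange rate (local currency per USD) at which a Big Mac costs the same in both countries."""
    return local_price / us_price

def over_under_valuation(local_price, us_price, market_rate):
    """Positive result -> local currency overvalued vs. USD; negative -> undervalued."""
    ppp = implied_ppp_rate(local_price, us_price)
    return (ppp - market_rate) / market_rate

us_price = 5.00      # hypothetical US Big Mac price, USD
local_price = 65.0   # hypothetical local-currency Big Mac price
market_rate = 16.0   # hypothetical market rate, local currency per USD

ppp = implied_ppp_rate(local_price, us_price)                    # 13.0
valuation = over_under_valuation(local_price, us_price, market_rate)
print(f"Implied PPP rate: {ppp:.2f}, valuation vs. USD: {valuation:+.1%}")
```

Because the implied PPP rate (13.0) is below the market rate (16.0), the local currency in this example comes out about 19% undervalued against the dollar; the World Bank’s real PPP work applies the same idea to whole baskets of goods rather than a single sandwich.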
Now that we’ve done a quick economics review, you may be wondering—but where does tech come into the picture? As I noted at the beginning of the series, AI is integrated into nearly all aspects of business and society—and the World Bank’s efforts are no different.
Maurice Nsabimana (Statistician, World Bank), a super impressive person whom I met during AI CON 2018, wants to involve “regular” people in the World Bank’s effort to gather the prices of goods and services. He accurately pointed out that camera-equipped mobile phones are available worldwide; in fact, more people have access to camera phones than to clean drinking water.
Maurice had an army of paid volunteers photograph the prices of a wide variety of goods and services, everything from supermarket produce to doctors’ bills, and ended up with over a million photos to analyze. Intel stepped in with its BigDL project, which makes it easy to develop and run AI models for problems such as image classification (the subject of another post). With Intel’s help, the World Bank was able to analyze the photos to determine (a) the goods or services pictured and (b) their prices.
The project is still ongoing, but it’s a fascinating use of AI to solve global problems that would normally require massive amounts of manual human intervention.
A faster way to jumpstart AI image recognition: BigDL
In our post above, I mentioned that the World Bank used AI image recognition software when implementing its camera phone project to better assess prices worldwide. The software that they used is called BigDL. In this post, I’ll give you a quick breakdown on what BigDL is, and why you should care.
BigDL is a free, distributed deep learning library from Intel. It runs natively on Apache Spark and enables data engineers and scientists to write deep learning applications in Scala or Python as standard Spark programs. BigDL does all the heavy lifting of managing the distributed computations.
Yulia Tell of Intel presented on the inner workings of BigDL at AI CON 2018. She emphasized the ease of use with BigDL—it’s basically as simple as writing a regular Scala or Python program. Because it’s so simple to use, it’s possible for users to easily scale an AI application—users don’t have to bother with the hassle of implementing the program at scale.
BigDL has a large amount of built-in support for AI, including neural networks. Users can load pre-trained Caffe or Torch models into Spark programs using BigDL, which is a great feature because it means that users can load their already-built models and run them. Users won’t have to start model building or training from scratch, saving them lots of effort.
BigDL was released as open source by Intel. As an open source project, BigDL has 50 contributors, half of whom are from outside Intel. In addition, there’s a BigDL image available for free on the AWS Marketplace, so you can try out a ready-made installation.
BigDL is just one example of AI innovation that may seem like a niche project but has global implications and the ability to multiply into a worldwide movement for change.
The future shines brighter with AI
Personally, I’m a techno-optimist. I’m convinced that technology will lead to a better future for humanity, and that the only real risk is our failure to tap into our imaginations and innovate our way around potential problems. In the flying cars vs. 140 characters debate, I’m definitely on the side of flying cars.
I’m not the only techno-optimist out there. Prof. Manuela Veloso of Carnegie Mellon is also optimistic about the future of technology. “AI and humans: better than just humans, better than just AI” was her closing statement at AI CON 2018, and it perfectly encapsulates why we continue to develop AI. Prof. Veloso has good reason for her optimism, given the ground-breaking nature of her work on human/AI interactions. Her group has developed “cobots”: autonomous robots that roam her office building and know how to ask humans for help when needed. Doesn’t that sound like a dream collaboration?
The cobots have no arms, so they can’t move objects that they’ve been requested to retrieve, or push elevator buttons. Instead, they ask nearby humans for help. If no humans help them, the cobots send an email to their human colleagues – the students and postdocs of Prof. Veloso’s group – with their location and problem. A human then rescues the cobot.
Prof. Veloso emphasized the need for cooperation between AI and humans in order for humans to receive the most benefits from AI. She calls these interactions “symbiotic autonomy.” In her view, without continual interactions between humans and AI, AI will not be able to fully benefit humans. AI certainly can’t replace humans, but can help humans with many tasks. By lending a helping hand, cobots and AI are able to improve human life—all through technology. So, why not be optimistic about the future of tech? It’s as bright as ever.
Onwards and upwards with AI on farms
Toto, I have a feeling that we’re not in Kansas anymore!
Actually, we could be in Kansas – or Iowa, or California, or any place where agriculture is vital to the region. AI, in conjunction with IoT (Internet of Things), has the potential to completely revolutionize agriculture on a global scale.
Jennifer Marsman of Microsoft presented the company’s latest project to increase agricultural yields while reducing inputs of scarce resources such as water and pesticides.
The project, part of Microsoft’s “AI for Earth” initiative, looked at combining localized sensors with AI to tell farmers the state of their fields and crops. The sensors would detect which area of a particular field needed more resources, such as water or pesticides. When we hear “sensors,” we may immediately think about the cost implications, but interestingly enough, Jennifer Marsman was surprised to find that the cost of sensors wasn’t the most complex part of the equation—it was actually the cost of connectivity.
To solve the connectivity problem, Microsoft tried drones and balloons, but the most innovative potential solution is the use of “white space” in television signals. These are unused parts of TV signals which could provide long-distance connectivity solutions; as Jennifer put it, you could potentially connect to your home network from 12 miles out.
Jennifer also noted that “AI for Earth” is accepting grant proposals for anyone who has an interest in applying AI to these tough human problems, like world hunger. If you have an AI idea, this could be your chance to make a difference!
Amazon and its new toy
Okay, I’ll admit that I’m the first in line to find out what cool new toys Amazon comes out with, whether that’s AWS or a “zero-line” grocery store. At AI CON 2018, Dan Mbanga of AWS showed off Amazon’s newest toy: the DeepLens camera, billed as the world’s first deep learning-enabled video camera for developers.
The local hardware on the video camera has the ability to run deep learning algorithms, such as image analysis and object recognition. The actual training occurs in the cloud, which is then followed by downloading the already-trained model to the camera.
If you’re still struggling to conceptualize why you may want a DeepLens camera, Amazon’s website provides a fun example:
You can train it to recognize whether a food is or isn’t a hot dog.
Of course, there are more complex and business-relevant uses as well (for those of you who don’t operate hot dog restaurants). It can also be trained to recognize different activities, such as brushing your hair or drinking coffee.
Want to score one? AWS is accepting pre-orders now for mid-June delivery.
Is AI smarter (and better) than humans?
In the eternal man vs. machine debate, those who are cautious not to offend our fellow humans tend to take a middle-line stance on whether AI is better than humans. However, Zoubin Ghahramani of Uber and the University of Cambridge threw caution to the wind with his stance on man vs. machine. He argued that AI should evolve beyond its human-centric origins to focus on solving problems in areas where humans are inherently bad.
One way to improve AI is by adding probabilistic measures to machine learning algorithms so that the models can calculate when they’re not sure how to complete a task. Currently, most ML algorithms are “all or nothing” algorithms – there’s no gray area or room for uncertainty, which doesn’t exactly emulate real life, which is filled with probabilistic outcomes.
You may think that if we simply increase the size of the data set, all “unknowns” will dissipate (the law of large numbers, for those who remember). But that’s not exactly true. Even with a large data set, it’s almost impossible to eliminate uncertainty in ML, especially in edge cases that fall far from the original dataset.
One example that Prof. Ghahramani gave was in image analysis and recognition. ML algorithms that could correctly distinguish a car from a dog suddenly became confused when some white noise was added to the images, classifying both as an ostrich. No, I don’t know why it picked an ostrich either, but it does show that current ML algorithms can be gamed. Adding room for uncertainty would help prevent these errors and make models more accurate.
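One simple flavor of this idea is letting a classifier abstain when its probability distribution over labels is too flat. The sketch below is purely illustrative (it is not Prof. Ghahramani’s method, and the scores, labels, and entropy threshold are all hypothetical): it converts raw classifier scores into probabilities and returns “I don’t know” when the entropy is high.

```python
import math

def softmax(scores):
    """Turn raw classifier scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy in bits: higher means the model is less sure."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def predict_with_abstain(scores, labels, max_entropy=1.0):
    """Return the top label, or None if the distribution is too uncertain."""
    probs = softmax(scores)
    if entropy(probs) > max_entropy:
        return None  # admit uncertainty instead of forcing a guess
    return labels[probs.index(max(probs))]

labels = ["car", "dog", "ostrich"]
print(predict_with_abstain([4.0, 0.5, 0.1], labels))  # confident: "car"
print(predict_with_abstain([1.1, 1.0, 0.9], labels))  # ambiguous: None
```

A model that says “not sure” on the second input is far more useful than one that confidently declares everything an ostrich, which is exactly the behavior Prof. Ghahramani was arguing for.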
Side note: another cool mention was the Automated Statistician, a website that allows you to enlist the help of an AI data scientist to gain a better understanding of statistics and probability. Check it out here.
With the help of our statistician friends, we may see more accurate ML models that are able to account for the inevitable probabilities of everyday life!
Creative AI: Making songs just for you
I have a question for you: can computers create new art, or only copy existing examples? The argument has raged since the birth of AI. Outside the AI community, most people assume that AI can only copy art, not create new works.
However, Amper Music has convinced me that AI can be creative—at least in the field of music composition. At AI CON 2018, I witnessed an exciting demo that proved that AI can make art. Amper Music’s AI asks you a few questions about your musical preferences—the genre, instruments, beat, etc.—and then creates a personalized song just for you. The songs are free for personal use, but you’ll need to pay $200 to use them commercially.
Amper Music recently received a $4 million funding round from venture capitalists who also seem to believe that AI can create new works of art. It’s a fascinating use of AI that could lead companies to finally move away from the same stock music in commercials and videos. To read more about Amper Music’s story, check out this TechCrunch article here.
Peering inside the body with AI
Imagine a doctor’s visit where the doctor simply looked inside of your body—no poking, prodding, or nasty needles. There’s some technology that enables medical professionals to do so (imaging technology), but it requires radiation at high doses which can be harmful to humans. The question remains: how can we look inside of the body non-invasively to attain the same benefits of imaging, but without the harmful side effects?
As you may have guessed from the theme of these posts, enter AI to the rescue! Prof. Greg Zaharchuk of Stanford University demonstrated AI that enables imaging with dramatically reduced doses of radiation, while still delivering all of the benefits of imaging. When the amount of radiation used in a procedure is reduced (whether the radiation is applied externally or injected into the body), the images can appear too noisy and the resolution too low. And indeed, Prof. Zaharchuk showed that the initial low-dose images were quite noisy.
However, AI can be leveraged to clean up the images and even enhance them, to the extent that they become nearly indistinguishable from the high-resolution, high-radiation images. The implications for the medical field are massive: imaging procedures that previously had limited uses can suddenly be applied to a wide variety of medical situations. This breakthrough could make patient care faster, cheaper, and more advanced than ever!
Can AI predict a disease before it affects you?
Many of the diseases that cause humanity the most problems aren’t “caught” anymore: they’re chronic illnesses, like diabetes, or deadly internal killers, like cancer. Early detection of such diseases, before they appear in their full-blown form, would be hugely impactful for society. For chronic diseases, the ability to monitor symptoms and (hopefully) the treatment’s effectiveness would also be impactful.
Detecting a medical condition based on a multiyear history of prescriptions filled by an individual provides an early warning system – and one that doesn’t require any extra tests or visits to the doctor. Julie Zhu and Dima Rakesh of Optum used deep learning to show that medical conditions can, in fact, be detected early and then monitored, based on prescription history.
They used an LSTM (long short-term memory) neural network to analyze the prescriptions filled over a two-year period by 4.52 million patients, focusing specifically on diabetes. Using the LSTM, they were able to predict the onset of diabetes before the disease was diagnosed. Furthermore, they showed that by monitoring patients’ prescriptions, they could determine whether a patient was being treated effectively, i.e., whether the diabetes was well-controlled.
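Before an LSTM can read a prescription history, the drug names have to be encoded as fixed-length integer sequences. The sketch below shows one common way to do that preprocessing step; it is not Optum’s actual pipeline, and the drug names, window size, and padding scheme are illustrative assumptions:

```python
# Hypothetical sketch: turning patients' prescription histories into the
# fixed-length integer sequences an LSTM expects. Not Optum's real pipeline;
# all drug names and sizes here are illustrative.

PAD = 0  # padding token for patients with short histories

def build_vocab(histories):
    """Map each drug seen in the data to an integer id (0 is reserved for padding)."""
    drugs = sorted({drug for history in histories for drug in history})
    return {drug: i + 1 for i, drug in enumerate(drugs)}

def encode(history, vocab, max_len):
    """Encode one patient's fills as a left-padded, fixed-length id sequence."""
    ids = [vocab[d] for d in history][-max_len:]   # keep only the most recent fills
    return [PAD] * (max_len - len(ids)) + ids

histories = [
    ["metformin", "lisinopril", "metformin"],  # patient A
    ["atorvastatin"],                          # patient B
]
vocab = build_vocab(histories)
sequences = [encode(h, vocab, max_len=4) for h in histories]
print(sequences)  # [[0, 3, 2, 3], [0, 0, 0, 1]]
```

Sequences like these, paired with a label such as “diagnosed with diabetes within N months,” are what a recurrent model would actually be trained on; the LSTM’s job is then to learn which temporal patterns of fills precede a diagnosis.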
AI also allows researchers to combine multiple variables, such as age, multi-year medical history, and gender to make more accurate predictions with regard to managing a patient’s disease. This expansive view of healthcare has the ability to change the course of patient care—and all thanks to AI!
Wondering if your idea is patentable? Have a question about this article? We can answer all of your questions — just hit "contact us" down below!