Machine Learning
September 27, 2016
Four miles north of Los Altos is the Googleplex. Seven more miles brings you to Facebook’s headquarters, and eight miles south is the Apple Campus — welcome to the Silicon Valley. Mall cop robots are commonplace, and we see self-driving cars everywhere. While the rest of the world views such cars as incredible feats of technology, many in the area see them as less of a feat and more of a nuisance to drive behind when in a rush. Self-driving cars are not only one of many science-fiction-esque technologies that we take for granted but also perhaps the most widely known application of machine learning. Machine learning is a subfield of artificial intelligence (AI) that allows computers to make decisions without being explicitly programmed. It uses pattern recognition to learn from large sets of data and apply “knowledge” to new tasks based on previous situations. Machine learning is versatile and applicable to a variety of subjects and professions. It is the Silicon Valley’s newest tech craze, and it should not be dismissed as a mere nuisance.
Now Hiring
The world of machine learning is quickly developing, and with this expansion comes a growing demand for engineers. Those who answer this call will face a set of challenges that differ considerably from those of a typical software engineer.
“Machine learning engineering is a little bit different,” Apple Senior Director of Machine Learning Gaurav Kapoor said. “First, you think about the problem you are trying to solve. The second step is asking, ‘What data do I need to collect to solve the problem?’ In deterministic programming you don’t have any data collection.”
Machine learning relies heavily on statistical analysis. A strong foundation in math thus eases entry into the field, and many machine learning engineers have backgrounds in statistics in addition to computer science.
Engineers must be able to understand and write algorithms that effectively handle enormous amounts of data and input, tasks that mathematical experience makes far easier.
David Andrzejek is the Vice President of the Vertical Solutions team for Apigee, a company that builds application programming interface (API) software that allows large companies to safely and efficiently manage interactions between thousands of users and servers.
“Companies look to machine learning to understand large amounts of customer data,” Andrzejek said. “So being able to find a way to transcribe that data into something that they can understand is something that employers really look for in a potential employee.”
Machine learning has revolutionized data science because machines can now plow through enormous banks of information and identify patterns at a rate no human could possibly match. For large corporations with millions of consumers, this ability to scale is essential.
This scalability can be applied in digital security. Apigee Chief Scientist Joy Thomas works to devise security algorithms using machine learning, specifically for Walgreens’ phone app.
“Between the backend [of Walgreens’ servers] and the app, there is a layer of software that Apigee makes that allows Walgreens to manage all the millions of apps that are running,” Thomas said. “There’s a lot of data that flows through [these systems,] and that data… can be used for insights in terms of improving the performance of the applications, and also… to recommend [Walgreens products] related to the customers’ [needs].”
However, the sensitive data carried through these systems is also valuable to others: competitors, hackers and cybercriminals. Automated bots bombard the servers with messages to scrape data on the company’s sales, guess users’ usernames and passwords and crash the website. As hackers make these malicious bots act more human in order to trick the system, Thomas and his colleagues must devise ways to counter their tricks.
“Given a set of samples of data which are labeled as [good and bad], the computer goes through the data and finds out what patterns distinguish the good from the bad, and [builds] up a system that can, given new data, be able to classify it,” Thomas said. “Security is a problem that’s still very difficult because every time you discover something, people on the other side change their behavior to hide themselves from the system.”
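To picture what Thomas describes, here is a minimal sketch in Python using scikit-learn. The session features, data and labels are hypothetical rather than Apigee’s actual system; the point is the workflow he outlines: learn patterns from labeled examples, then classify new traffic.

```python
# Hypothetical features for one client session: requests per minute,
# average seconds between clicks, fraction of failed login attempts.
from sklearn.ensemble import RandomForestClassifier

sessions = [
    [120.0, 0.1, 0.9],   # rapid-fire requests, many failed logins
    [3.0,   8.5, 0.0],   # slow, human-like browsing
    [95.0,  0.2, 0.7],
    [5.0,   6.0, 0.1],
    # ...in practice, thousands of labeled sessions
]
labels = [1, 0, 1, 0]    # 1 = "bad" (bot), 0 = "good" (human)

# Learn the patterns that distinguish good traffic from bad...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(sessions, labels)

# ...then classify new, unseen traffic.
new_session = [[110.0, 0.15, 0.8]]
print(model.predict(new_session))  # [1] -> flagged as a likely bot
```

As Thomas notes, the cat-and-mouse nature of security means such a model must be retrained constantly as attackers change their behavior to evade it.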
Security APIs are pragmatic and highly dynamic, but they are not the only useful application of machine learning. As new machine learning techniques are developed and refined, potential uses for the technology are branching in a multitude of innovative directions. New avenues for data collection are improving the way machines identify patterns in statistical data, and this ability to discover significant trends bears great potential both for the companies that drive the economy and for the economists who analyze it.
IBM is developing Watson, a machine that uses data analysis and natural language processing to answer complex questions. Google is working on machines that mimic the way human minds play games and make music, and countless other companies have adapted machine learning to applications in healthcare, imaging and beyond.
“The future of machine learning is huge,” Andrzejek said. “You see teachers, scientists, mathematicians, statisticians, [who are] all leaving their fields to be in machine learning.” The field is constantly evolving with the advent of new technology, and careers are evolving along with it. While the future is certainly murky, machine learning is a progressive and promising field: the demand is high, the pay is good and the job is ever-changing. Those with a strong affinity for mathematics and science should look into it as a career.
What Sci-fi Gets Right
Science fiction films are saturated with references to artificial intelligence (AI), often in the form of robots capable of developing their own thoughts, feelings and dreams. Cyborgs have staked their claim on the big screen, but how do Hollywood’s creations compare to the reality of machine learning?
“Metropolis,” a 1927 science-fiction drama, has been acclaimed as the first movie to explore the relationship between man and machine. The German film portrays the divide between the social classes of a futuristic city. The AI featured in it, the “false Maria,” wreaks havoc throughout the city in order to drive the working and upper classes further apart.
This film portrays one of many examples of the power-hungry, emotionless robot trope that Hollywood has reused for years. Another depiction is HAL 9000 from “2001: A Space Odyssey,” a murderous AI who kills many of the crew members on his spaceship due to their plans to deactivate him.
In reality, the chances of an AI “going rogue” and gaining sentience, as they do in many of these films, are very slim. Many science-fiction movies play with the idea that at any moment, an AI could become conscious of its own existence and begin making decisions of its own accord. Machine learning, however, consists of deliberately engineered programs that learn statistical patterns from data in order to mimic human decisions. Self-awareness is far too abstract for any modern computer to attain.
Regardless of the inaccuracies in many Hollywood portrayals of artificial intelligence, certain film representations of AI are surprisingly reasonable.
In the Avengers universe, genius billionaire Tony Stark, or Iron Man, creates an artificial intelligence capable of managing his house, interpreting voice commands and piloting his iron suits. Though J.A.R.V.I.S., or Just A Rather Very Intelligent System, is purely fictional, it is far more attainable than the false Maria or HAL. J.A.R.V.I.S. is a prime example of the sophistication that could be reached in the future; some commonly advertised AI technology already includes voice commands and home management. A company called Nest has created a smart thermostat that collects data by monitoring daily household activity, such as a homeowner’s schedule and preferred temperatures, and uses that information to develop a schedule for maintaining the temperature throughout the house.
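The core of that learning step can be surprisingly simple. Below is a toy sketch in Python, not Nest’s actual algorithm, of how a thermostat might turn a log of manual adjustments into an automatic schedule; the adjustment log and the fallback temperature are invented for illustration.

```python
# Toy sketch of a learned thermostat schedule (not Nest's actual
# algorithm): average the setpoints a homeowner chose at each hour
# of the day, then replay those averages automatically.
from collections import defaultdict

# Hypothetical log of (hour_of_day, temperature_set_by_owner).
adjustments = [
    (7, 68), (7, 70), (8, 70),     # warm mornings
    (13, 64), (14, 63),            # cooler while the house is empty
    (19, 71), (20, 72), (20, 70),  # warm evenings
]

observed = defaultdict(list)
for hour, temp in adjustments:
    observed[hour].append(temp)

# The learned schedule: one target temperature per observed hour.
schedule = {hour: sum(temps) / len(temps) for hour, temps in observed.items()}

def target_temperature(hour):
    """Return the learned setpoint, falling back to a default."""
    return schedule.get(hour, 66)  # 66 F when no data for that hour

print(target_temperature(7))  # 69.0
print(target_temperature(3))  # 66 (no observations at 3 a.m.)
```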
J.A.R.V.I.S. may have access to exclusive hardware (Iron Man’s suits), but it is not an entirely unrealistic portrayal of the potential future of machine learning.
If we disregard the somewhat far-fetched nature of many of these movies, several contain legitimate, albeit ambitious, goals for those who are looking to develop AI. Even though we may someday have computers capable of thinking autonomously, that day will likely not come in the near future. So unfortunately, sci-fi fans will have to continue waiting for their robot companions — and overlords — for as long as it takes.
Future of Machine Learning
Machine learning is a versatile field with a bright future. Engineers in the field are constantly innovating and creating new technologies that could one day be fully incorporated into people’s lives. Kitchens, labs and hospitals will soon be filled with machine learning technology.
June Intelligent Oven
Just in time for the holidays, a major advancement in home technology will be available for shipping all over the United States. The June Intelligent Oven is the first major breakthrough in kitchen appliances since the microwave became a household staple in the 1970s, and it uses machine learning in the form of image recognition.
This smart oven can identify 15 common foods, including cookie dough and bacon, using built-in HD cameras that withstand temperatures of up to 500 degrees Fahrenheit and cutting-edge image recognition technology. After its release, June will be continually updated to recognize a larger variety of foods.
After June identifies the food about to be cooked, it weighs the food and recommends a cooking plan. Two core temperature probes ensure that foods like steak are cooked to the desired temperature.
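A rough sketch of that identify-then-plan flow might look like the following Python. The classifier stand-in, the cooking-plan table and the time-scaling heuristic are all hypothetical, since June’s actual software is proprietary.

```python
# Sketch of an identify-then-plan flow like June's; the classifier
# call and cooking-plan table here are hypothetical stand-ins.
COOKING_PLANS = {
    "bacon":        {"mode": "broil", "temp_f": 400, "minutes": 12},
    "cookie_dough": {"mode": "bake",  "temp_f": 350, "minutes": 11},
    "steak":        {"mode": "roast", "temp_f": 450, "probe_temp_f": 130},
}

def classify_frame(camera_frame):
    """Hypothetical stand-in for the oven's image-recognition model,
    which would map an HD camera frame to one of ~15 known foods."""
    return "bacon"

def recommend_plan(camera_frame, weight_grams):
    food = classify_frame(camera_frame)
    plan = dict(COOKING_PLANS[food])
    # Scale cook time with the weighed amount of food (toy heuristic).
    if "minutes" in plan:
        plan["minutes"] = round(plan["minutes"] * weight_grams / 250)
    return food, plan

food, plan = recommend_plan(camera_frame=None, weight_grams=500)
print(food, plan)  # bacon {'mode': 'broil', 'temp_f': 400, 'minutes': 24}
```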
June co-founder Nikhil Bhogal designed the camera software for the first five generations of the iPhone, and members of the June team have previously worked on the Apple Watch, GoPro and Fitbit. This qualified team utilizes the same type of machine learning algorithms that Google Images uses.
Biotechnology and Cancer Diagnosis
As scientists uncover more layers of complexity in the natural world, machine learning may be the only technology capable of keeping pace with the expanding field of biology. Cancer diagnosis and treatment, two of the most pressing tasks in modern biology and medicine, will likely improve thanks to the abilities of artificial intelligence.
Sophia Genetics, a 5-year-old company paving the way for high-tech genomic research, uses machine learning algorithms to analyze thousands of different genomes in order to diagnose certain types of cancer. The software can also recommend treatment based on the genetic mutation that causes a given cancer.
According to Anna Domanska of “Industry Leaders Magazine,” this type of technology may soon be able to compare one individual’s cancer to that of a previous patient, give the survival rate of those with similar conditions and recommend the most effective treatment plan. In as soon as two years, machine learning may remodel the way we diagnose cancer.
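One way to picture that compare-to-previous-patients idea is a nearest-neighbor classifier, sketched below with scikit-learn. The mutation profiles and diagnoses are entirely invented for illustration; a real system would use far richer genomic data.

```python
# Toy sketch of comparing a new patient's cancer to previous
# patients, using a 1-nearest-neighbor classifier. The mutation
# profiles and labels are hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Each profile marks presence (1) or absence (0) of a few mutations:
#            [BRAF, EGFR, KRAS, TP53]
profiles = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
]
diagnoses = ["melanoma", "lung", "pancreatic", "melanoma"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(profiles, diagnoses)

# A new patient's profile is matched to the most similar prior case.
print(model.predict([[1, 0, 0, 1]]))  # ['melanoma']
```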
Ellie the Robot Therapist
Machine learning isn’t just for the lab. At the Institute for Creative Technologies at USC, researchers have been developing a “robot psychologist” that can take on the duties of a human psychotherapist.
Named “Ellie,” this computer uses machine learning software to process information from a patient’s behavior and determine its own reaction. A webcam and microphone let it read facial expressions and rate of speech, as well as the length of the pause a client takes before answering a question. That information is fed through a series of algorithms that tell Ellie when to nod or when to ask a specific follow-up question.
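A drastically simplified sketch of that decision logic might look like the following Python; the behavioral cues, thresholds and canned responses are hypothetical stand-ins, as Ellie’s actual models are far more sophisticated.

```python
# Hypothetical reaction logic mapping simple behavioral cues
# (speech rate, pause length, gaze) to an interviewer response.
def choose_reaction(speech_rate_wpm, pause_before_answer_s, gaze_down):
    """Map simple behavioral cues to a reaction for the virtual therapist."""
    if pause_before_answer_s > 4.0:
        # A long pause may signal a difficult topic; probe gently.
        return "ask_followup", "Can you tell me more about that?"
    if speech_rate_wpm < 90 and gaze_down:
        # Slow speech with averted gaze: acknowledge and encourage.
        return "nod", None
    return "listen", None

action, prompt = choose_reaction(
    speech_rate_wpm=80, pause_before_answer_s=5.2, gaze_down=True
)
print(action, prompt)  # ask_followup Can you tell me more about that?
```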
As of now, Ellie is used only for research and not in medical practice, but recent results suggest it could be useful in the future precisely because it is not human. According to a 2014 study by researcher Jonathan Gratch and his staff, patients who undergo “treatment” from Ellie are more likely to open up if they are told that it is only a computer, with no human operator. Patients who are told they are talking to a human-controlled robot are more closed off and focus more on what they say, whereas the distance of a guaranteed machine encourages more truthful responses.
According to “The Economist,” this type of patient reaction could be useful for war veterans suffering from PTSD. Because many ex-soldiers are reluctant to seek therapy for mental illness, Ellie may offer a non-human emotional outlet that is also capable of suggesting treatment plans.
Conclusion
Machine learning is a field that has inspired Hollywood producers, statisticians and engineers alike. From smart ovens to self-driving cars, the field has piqued public interest in recent years. Silicon Valley, arguably the forefront of technological progress, is often introduced to advancements earlier than other parts of the nation. As citizens of this incredible microcosm of innovation, we should appreciate these changes that will shape what the world looks like years from now. We are granted a front-row seat to watch the new, developing industry of machine learning blossom into a mature field; not many other communities can say the same. It is important to recognize the privilege of living in the Silicon Valley and to fully embrace the opportunities and culture of innovation around us. Machine learning is one of many novel fields developing in the tech-savvy neighborhood we live in, and we should not take it for granted.