Everything You Need to Know about Artificial Intelligence (AI)

Stephen Hawking once told the BBC that “the development of full Artificial Intelligence could spell the end of the human race.” Is there truth behind such hyperbole? Not according to a White House statement from 2016, which claimed that the next couple of decades won’t necessarily see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans.”

The report, however, did go on to say that “machines will reach and exceed human performance on more and more tasks,” meaning that, if it isn’t properly governed, Artificial Intelligence may relegate humans to the bottom of the decision-making ladder.

So, should we be afraid of AI? Are we capable of regulating big data to keep it under control? What is AI in the first place and what separates fact from fiction?

Let’s dive right in to find out.

What Is Artificial Intelligence and What Does It Mean? 

More than two thousand years before computers could play chess and cars could drive themselves, the ancient Greeks told tales of Talos, the giant bronze robot whom Zeus assigned to protect his lover, Europa, who resided on the island of Crete.

Talos was tasked with patrolling the island three times a day, warding off pirates with boulders, until the sorceress Medea brought him down during the voyage of the Argonauts. In him, the ancient imagination anticipated Britannica’s simple definition of AI: “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

The term is often applied to systems endowed with the intellectual processes characteristic of humans, such as the “ability to reason, discover meaning, generalise or learn from past experience.”

But just how far along the learning curve has AI come and what more can it accomplish?

The Evolution of Artificial Intelligence

In 1950, British polymath Alan Turing hypothesised that the human brain is “in large part a digital computing machine.”

He claimed that the brain’s cortex at birth is something akin to an “unorganised machine” that through “training” over time transforms into a fully functional “universal machine” capable of processing large and complex pieces of data.

Turing thereafter proposed the “Turing test” to establish a criterion for whether an artificial computer is genuinely capable of thinking.

A series of bottlenecks, including prohibitively expensive computers and a lack of funding, hindered the transition from theory to practice, setting AI research back by years.

But in 1956, Allen Newell, Herbert Simon and Cliff Shaw presented the Logic Theorist, a program designed to mimic human problem solving, at the Dartmouth Summer Research Project on Artificial Intelligence, where John McCarthy officially coined the term “Artificial Intelligence.” This seminal event opened the floodgates of AI research for the next two decades.

In the 1970s, AI was the talk of the town. Computers were faster, cheaper and could store more information. Machine-learning algorithms had also improved, and optimism was sky-high. So much so that computer scientist Marvin Minsky told Life magazine that researchers were “three to eight years” away from a machine with “the general intelligence of an average human being.”

Clearly, this was overly optimistic. It’s 2022, and machines still fall well short of human-level intelligence, let alone sentience.

Not for lack of trying. In 1997, IBM’s chess programme Deep Blue defeated reigning world champion Garry Kasparov. In the same year, Dragon Systems released speech-recognition software for Windows PCs.

And today, machines are capable of collecting, curating and processing volumes of data far too vast for the human mind to handle.

Types of Artificial Intelligence

Artificial intelligence can be broadly categorised into four main types in ascending order of sophistication: reactive machines, limited memory, theory of mind and self-awareness.

Reactive Machines

This is the most basic type of AI: the computer simply reacts to an external stimulus, without any internally generated conceptions. Reactive machines cannot proactively interact with the wider world; they neither create new memories nor draw upon past experiences to inform present decisions.

Example: The Deep Blue chess-playing supercomputer.

Limited Memory

This second type can store past data to make better predictions, and it forms the basis of every machine-learning model, even the simplest. Its architecture is far more complex than the reactive kind, but it is far from spontaneous: specific objects and their movements are identified and tracked over time in order to shape a response.

Example: Level 2 autonomous vehicles with parking and lane-changing assist.
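
To make the contrast with reactive machines concrete, here is a minimal Python sketch of the limited-memory idea. It is illustrative only, and every name in it is hypothetical: the system buffers a handful of past observations and uses them, rather than the current stimulus alone, to decide what comes next.

```python
from collections import deque

class LimitedMemoryTracker:
    """Toy limited-memory system: a bounded history of observations
    informs the next prediction (old observations simply fall out)."""

    def __init__(self, history_size: int = 5):
        self.history = deque(maxlen=history_size)

    def observe(self, position: float) -> None:
        self.history.append(position)

    def predict_next(self) -> float:
        # With fewer than two observations we can only react, not predict.
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        # Assume roughly constant velocity: extrapolate from the average recent step.
        points = list(self.history)
        steps = [b - a for a, b in zip(points, points[1:])]
        return points[-1] + sum(steps) / len(steps)

tracker = LimitedMemoryTracker()
for pos in [0.0, 1.1, 2.0, 3.2]:    # e.g. observed lane positions of a nearby car
    tracker.observe(pos)
print(tracker.predict_next())        # ~4.27: past data informs the present decision
```

A reactive machine, by contrast, would map the latest observation straight to a response and retain nothing.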

Theory of Mind

In psychology, Theory of Mind, or ToM, serves as the foundation for all social interactions, allowing us to predict and interpret other people’s behaviour to tailor our own responses. In the context of AI, this is bordering on sentience, and if we are to coexist harmoniously with our machine counterparts, the advancement of ToM research will be pivotal.

Example: Spider-Man’s AI Iron Suit, “Karen.”

Self-awareness

The final step in the AI roadmap is truly the stuff of science fiction. What we’re looking at is the harnessing of consciousness in an inanimate being. In other words, AIs having the ability to think, feel and react as autonomous entities, without the external influence of their makers. Developments are well under way, but we may need to wait a while for this one.

Example: Ava from the movie Ex Machina.

Real-life Applications of Artificial Intelligence

While we are still a long way from Type IV AI, there’s no denying its growing influence in our daily lives. From healthcare to transport, here are a few real-world applications of Artificial Intelligence:

Finance and Insurance

In the push to remove human bias and error from high-risk processes, the financial sector has turned to Artificial Intelligence as a possible mitigation strategy:

  • Loan decisions can now be made by software that weighs far more data about a borrower than just a credit score and a background check (a toy sketch of the idea follows this list).
  • Digital “Robo-advisors” can now assemble bespoke investment portfolios in seconds, taking into account an investor’s needs and desired level of risk, all but putting brokers to the sword.
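
As a rough illustration of how loan software can weigh more than a credit score, here is a toy logistic model that folds several borrower signals into one approval probability. The feature names, weights and bias are entirely hypothetical; real lending models are learned from historical data and are far more elaborate.

```python
import math

# Hypothetical feature weights; a real lender would learn these from data.
WEIGHTS = {"credit_score": 0.004, "income_stability": 0.9,
           "debt_to_income": -2.5, "years_employed": 0.15}
BIAS = -3.0

def approval_probability(applicant: dict) -> float:
    """Toy logistic model: combine several borrower signals,
    not just a credit score, into a single approval probability."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

applicant = {"credit_score": 640, "income_stability": 0.8,
             "debt_to_income": 0.35, "years_employed": 4}
print(f"Approval probability: {approval_probability(applicant):.0%}")   # ~50%
```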

As for insurance, some digital startups like Lemonade are wielding AI to process claims within minutes rather than days or weeks.

This is down to highly developed Artificial Intelligence systems that can cross-reference vast numbers of data points about a claim in seconds, dramatically reducing processing times.

National Security

In a situation like war, the ability to pre-empt, adjust and deploy strategies in real-time can be the difference between life and death.

The rise of big data, coupled with developments in machine learning, means huge amounts of information can be sifted in near real time, providing commanders and troops on the ground with an unprecedented level of intelligence, and allowing them to act fast to prevent casualties.

The next step in national-security intelligence is the introduction of autonomous weapons systems capable of deploying themselves upon the detection of a threat, a development that has given rise to the term “hyperwar.”

Example: The US military’s Project Maven is designed to sift through troves of surveillance footage and flag suspicious activity for analysts, thereby drastically reducing response times.
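
Conceptually, the triage step reduces to a threshold over per-frame suspicion scores, as in the deliberately naive sketch below. This is not Project Maven’s actual pipeline; the threshold and frame names are invented, and in practice the scores would come from an object-detection model.

```python
ALERT_THRESHOLD = 0.8    # hypothetical cut-off for analyst review

def triage(frame_scores: dict) -> list:
    """Return the frames whose suspicion score warrants a human look."""
    return [frame for frame, score in frame_scores.items()
            if score >= ALERT_THRESHOLD]

scores = {"frame_001": 0.12, "frame_002": 0.93, "frame_003": 0.77}
print(triage(scores))    # ['frame_002']
```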

Healthcare

In 2020/21, more than 12,000 clinical negligence claims were reported in the UK, a 10-year high.

In response, the healthcare sector has been urged to adopt Artificial Intelligence to deliver more accurate diagnoses, predict the development of malignant conditions and produce improved treatment plans in a timely manner.

Take, for instance, an application of deep learning in the detection of cancerous lymph nodes. Computers can now be trained to sift through data sets to distinguish between regular and irregular lymph nodes, which can help provide an accurate diagnosis.
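
As a sketch of the mechanics involved, and emphatically not the clinical system itself, here is a tiny PyTorch training loop for a two-class scan classifier. The “scans” are synthetic noise and the network is toy-sized; the point is only the train-on-labelled-examples pattern that underpins such diagnostic tools.

```python
import torch
import torch.nn as nn

class NodeClassifier(nn.Module):
    """Tiny CNN: classify a 64x64 greyscale scan as regular (0) or irregular (1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 64x64 -> 32x32
            nn.Flatten(),
            nn.Linear(8 * 32 * 32, 2),       # two classes
        )

    def forward(self, x):
        return self.net(x)

model = NodeClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins for a batch of labelled scans.
images = torch.randn(16, 1, 64, 64)
labels = torch.randint(0, 2, (16,))

for _ in range(5):                           # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"final toy loss: {loss.item():.3f}")
```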

Example: Researchers at Tulane University used AI to analyse tissue scans for cancer diagnoses. They gathered 13,000 images of colorectal cancer from over 8,000 subjects to develop a machine-learning model that eventually provided a more accurate diagnosis than human doctors.

Criminal Justice

Between 2018 and 2019, a Black person was 47 times more likely to be subjected to a “stop and search” under Section 60 than a white person in the UK, signifying gross racial biases in the criminal justice system.

Judicial experts have suggested that AI programs may reduce such bias in law enforcement, leading to fairer sentencing.

Through predictive risk analysis, machine learning can be used to assess the likelihood of criminal behaviour, allowing for quick responses and preventive action.

Example: We’re not quite at the level of Minority Report, but the city of Chicago has deployed an AI-driven “Strategic Subject List” that analyses previous arrests to estimate the risk of future offending.

Through the curation of personal data, it is able to rank more than 400,000 Chicagoans on a scale of 0 to 500, based on factors such as age, criminal activity, victimisation, drug-arrest records and gang affiliation, in order to profile potential offenders.
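
Mechanically, such a ranking can be as simple as a clamped weighted sum over recorded factors, as in the deliberately naive sketch below. Every factor name and weight here is hypothetical: the actual system’s methodology was not made public.

```python
# Hypothetical factors and weights, purely for illustration.
FACTOR_WEIGHTS = {
    "prior_arrests": 40,
    "prior_victimisations": 55,
    "drug_arrests": 25,
    "gang_affiliation": 90,
}

def risk_score(record: dict) -> int:
    """Clamp a weighted sum of recorded risk factors to the 0-500 scale."""
    raw = sum(FACTOR_WEIGHTS[f] * record.get(f, 0) for f in FACTOR_WEIGHTS)
    return max(0, min(500, raw))

print(risk_score({"prior_arrests": 2, "drug_arrests": 1}))    # 105
```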

Of course, this raises serious ethical questions about criminal profiling, no matter the degree of accuracy.

Transport

In 2021, there were more than 1,500 road deaths reported in the UK, raising questions about general road safety.

One possible solution to reduce human driving errors is the introduction of autonomous vehicles. Currently at the third of five proposed levels of autonomy, vehicles are now able to navigate roads with minimal intervention from a human driver.

Drivers may still be called to action when the AI cannot make a decision; however, we are well and truly on our way to fully driverless cars.
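
As a caricature of that handover logic, and not any manufacturer’s actual implementation, a level-3 system can be thought of as a guard condition on autonomy. The thresholds and names below are invented for illustration.

```python
from dataclasses import dataclass

SPEED_CAP_MPH = 37       # hypothetical cap, echoing current level-3 systems
MIN_CONFIDENCE = 0.9     # hypothetical threshold on the AI's certainty

@dataclass
class DrivingState:
    speed_mph: float
    plan_confidence: float    # 0.0-1.0: how sure the AI is of its next manoeuvre

def autonomy_allowed(state: DrivingState) -> bool:
    """The car drives itself only inside its safe envelope."""
    return (state.speed_mph <= SPEED_CAP_MPH
            and state.plan_confidence >= MIN_CONFIDENCE)

state = DrivingState(speed_mph=30.0, plan_confidence=0.95)
print("AI drives" if autonomy_allowed(state) else "Driver, please take over")
```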

Example: Tesla’s Autopilot is a popular example of vehicle AI; however, Audi’s Traffic Jam Pilot represents the highest level of autonomy currently available. It can autonomously steer, accelerate and brake Audi vehicles at speeds of up to 37 miles an hour.

Advantages and Disadvantages of Artificial Intelligence

While the benefits of machine-enabled assistance are bountiful, the rise of AI also paves the way for some less desirable outcomes. Let’s take a look at some of the advantages and disadvantages of Artificial Intelligence:

Advantage — Reducing Human Error

Many industries have turned to AI to reduce the margin of error in human activities. From medical negligence to careless driving, placing life-or-death functions in the hands of a machine could spare scores of casualties, increase efficiency and productivity, and lead to a well-oiled society.

Advantage — Faster Decisions

Complementary to reducing human error is Artificial Intelligence’s split-second decision-making, which allows its beneficiaries to adjust, respond quickly and prevent large-scale calamities. Whether it’s detecting malignant cancer or identifying a threat on the battlefield, AI can make pivotal decisions in the blink of an eye.

Advantage — Fewer Risks

Activities such as defusing a bomb, venturing 20,000 leagues under the sea or exploring deep space are risky for humans to undertake. Handing these responsibilities over to an AI would enable the human race to make significant advances in research without the added risk of direct exposure.

Disadvantage — High Costs

Much of the hold-up in AI development since the ’50s has been down to high costs. As the scope of data increases daily, the need for ever more capable hardware becomes a recurring expense. Add to that the running costs of maintenance and repair, and the bill may reach prohibitive figures in no time.

Disadvantage — Unemployment

Possibly the biggest disadvantage of the growing stature of Artificial Intelligence is the number of ways in which it might make humans redundant. As AIs grow more advanced, they’ll be able to carry out more human functions with greater efficiency. In fact, the World Economic Forum estimates that AI will displace 85 million jobs by 2025.

Disadvantage — Laziness

While AI does improve efficiency and productivity, saving time on mundane processes, people appear to be growing increasingly reliant on it to get things done. With increased automation, less human effort is required, which may have wider repercussions, such as poorer performance in the workplace.

The Future of Artificial Intelligence

In 2018, renowned AI expert Stuart Russell played down the prospect of machines waging war against humans in a Terminator-style takeover, stating that “there are still major breakthroughs that have to happen before we reach anything that resembles human-level AI.”

Russell was quick to observe that present-day AI is not equipped to fully understand language, a telling difference between humans and machines: while we can interpret and translate machine languages, machines cannot yet do the same with ours.

This leaves a giant gulf in machine learning, one that will require significant investment, further research and constant development to bridge. Until then, we are certainly safe from a full-scale machine invasion.

Ethics, Dangers, Opportunities and Risks

Much of Artificial Intelligence today is driven by the collection and manipulation of data to identify emerging patterns. This raises serious questions around data protection and the loss of privacy.

Although robotic devices still aren’t playing an active part in data collection, except in cyber security, they will gain a stronger foothold in active surveillance with the emergence of technologies such as the Internet of Things and “smart” systems.

If machines are suddenly able to autonomously gather private information, what sort of governance will we have in place to prevent the eventual descent into some kind of machine inferno?

List of Professions that May Be Replaced by AI

Here are some jobs that may be replaced by AIs in the distant—or not so distant—future:

  1. Customer service agents — if an AI chatbot can answer your query instantly, why wait for an actual person, who may be having a bad day, to respond?
  2. Receptionists — you may already have noticed auto-check-ins at hotels and airports, which have significantly improved the speed of the process.
  3. Proofreaders — with apps like Grammarly already making waves in the literary space, soon our beloved proofreaders may need to find another job (one that hasn’t been replaced by an AI too).
  4. Couriers and delivery — cities like Milton Keynes are already using AI-powered smart bots to send food deliveries to customers. The question is, who gets the rider’s tip?
  5. Doctors — currently, robotic surgery allows doctors to carry out small procedures such as incisions with fewer errors. It’s only a matter of time until they replace doctors completely.
  6. Bus drivers — governments around the world are already planning for a driverless future, which means that chirpy bus drivers may be out of a job in the near future.
  7. Security guards — AIs like Yelp’s security robot are fitted with directional microphones, infrared sensors and high-definition cameras that can detect suspicious activity and alert authorities quickly. And they certainly won’t sleep on the job.

Artificial Intelligence is an unstoppable force, gradually taking over menial tasks as we move towards a seamlessly efficient future.

But before we even begin to conceive of a future with sentient machines, we must first address AI’s present shortcomings, especially the ethical and moral questions raised by AI adoption, not to mention the gap in language understanding.

Until then, take a step back and watch as the age of data turns floods of raw information into meaningful insights that could mean the difference between life and death.