
Artificial intelligence (AI): what is it?


Its applications, architecture, and prospects

Artificial intelligence (AI) is the ability of a digital machine to carry out cognitive tasks usually associated with intelligent beings. This entails engaging with its environment, solving problems, determining what is true, making predictions, making recommendations, and carrying out intricate computations.

The majority of sophisticated AI models available today are self-learning, which allows them to continuously improve through feedback loops. Others, still in the early stages of development, can produce text, audio, video, code, and even artistic and musical creations that were previously believed to be exclusively human. This is known as generative AI (genAI).


All of this is made possible by data, which AI feeds on. By combining massive datasets, processing algorithms and fast computing power, AI models automate processes, extract insights, make predictions and more.

AI Types


AI falls into four broad groups based on its present capabilities:


• Reactive AI
• Limited memory AI
• Theory-of-mind AI
• Self-aware AI

Reactive AI, often known as weak AI, is the first generation of artificial intelligence.
These models don’t learn over time, but they can accomplish a single task well, often better than humans. This technology is “useful for testing hypotheses about minds, but would not actually be minds,” as American philosopher John Searle put it.

Limited memory AI has the ability to learn from the past and apply those lessons to its algorithms in order to continuously improve. These models perform classification, forecasting, and other tasks using historical, preprogrammed, and observational data. The quintessential example of this is machine learning (ML), and more specifically, deep learning.


We still don’t have self-aware AI or theory-of-mind AI. If machines had theory of mind, they could make decisions in a manner akin to humans, modifying their actions to interact with people and react to novel situations.
Self-aware AI would take this further, attaining true sentience and consciousness.

These types of theoretical AI are often referred to as artificial general intelligence (AGI) and artificial superintelligence (ASI, or “super AI”), the latter being when machines could potentially surpass human intelligence.


Such highly advanced types of machines remain in the domain of sci-fi for now. But they’re not as far-fetched as they used to be with the rapid acceleration of generative AI (genAI), deep learning capabilities and other advancements.


AI and ML are closely intertwined


ML is critical to AI development. It is the process of using mathematical models to help computers learn without direct instruction. ML algorithms can detect patterns, anticipate what will happen next and provide recommendations.

As described by Microsoft, an intelligent computer “uses AI to think like a human and perform tasks on its own. Machine learning is how a computer system develops its intelligence.”


AI models are built using ML and other techniques. ML models are created by studying data patterns; data scientists then optimize the models based on those patterns. This process repeats, and the model is continually fine-tuned until its accuracy is high enough to achieve the intended tasks.
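To make that loop concrete, here is a minimal, illustrative sketch in Python. The toy dataset, learning rate and accuracy threshold are invented for the example: gradient descent repeatedly measures how far off the model is, adjusts its single weight, and stops once the error is small enough.

```python
# Minimal sketch of the "study patterns, adjust, repeat" loop described above.
# The data, learning rate and stopping threshold are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy data that roughly follows y = 3x plus a little noise.
X = rng.uniform(0, 1, size=100)
y = 3.0 * X + rng.normal(scale=0.1, size=100)

w, lr = 0.0, 0.5            # start with an untrained weight
for step in range(1000):
    pred = w * X
    error = np.mean((pred - y) ** 2)       # how wrong the model still is
    if error < 0.02:                       # "accurate enough" -> stop tuning
        break
    grad = 2 * np.mean((pred - y) * X)     # direction to adjust the weight
    w -= lr * grad                         # fine-tune and repeat

print(f"learned weight ~ {w:.2f} after {step} steps (true value is 3.0)")
```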

ML programs can have supervised, semi-supervised or unsupervised methods of learning.

 

• Supervised learning: Programs are given datasets that have already been labeled, and the program learns the associations between inputs and labels.
• Semi-supervised learning: Programs are given partially labeled data and must fill in the gaps based on contextual information.
• Unsupervised learning: Programs must find correlations between datasets that have no labels or contextual information. They are not given explicit instructions on what to do with the data.
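As a small illustration of the difference between the first and last of these, the sketch below (assuming scikit-learn is installed; the six data points are invented) trains a supervised classifier on labeled points, then lets an unsupervised clustering algorithm find the same two groups without any labels.

```python
# Contrast of supervised vs. unsupervised learning on an invented toy dataset.
# Assumes scikit-learn is installed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Six 2D points: three near (0, 0) and three near (5, 5).
X = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4],
              [5.1, 5.0], [4.8, 5.2], [5.3, 4.9]])

# Supervised: labels are provided up front and the model learns the mapping.
labels = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X, labels)
print("supervised predictions:", clf.predict([[0.2, 0.3], [5.0, 5.1]]))

# Unsupervised: no labels are given; the model must find the structure itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_)
```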

Some models incorporate a human in the loop (HITL), in which humans work alongside models, analyzing their inputs and outputs and making corrections and adjustments as needed. Models then use that feedback to continually optimize.


Deep learning


Deep learning is a type of ML based on neural networks that are composed of many layers and mimic the way neurons interact in the human brain. This technique can process a wider range of data sources and can often provide more accurate results than traditional ML.

Data is ingested and processed through multiple iterations and the model learns increasingly complex data features. The network can make determinations about data and learn whether that deduction is correct, then apply those insights to make decisions about new data. Computer scientists often describe this as scalable ML.
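As a rough illustration of what “layers of neurons” means in practice, the toy network below (plain NumPy, with invented XOR data; a real deep learning model would have many more layers and use a framework such as PyTorch or TensorFlow) pushes data forward through a hidden layer, measures its error, and nudges the weights over many iterations.

```python
# Toy two-layer neural network trained on XOR with plain NumPy, to illustrate
# layered processing and repeated forward/backward passes. Real deep learning
# models are far larger and built with dedicated frameworks.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Weights and biases for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: data flows through successive layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error and adjust weights layer by layer.
    out_err = (output - y) * output * (1 - output)
    hid_err = (out_err @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ out_err
    b2 -= lr * out_err.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ hid_err
    b1 -= lr * hid_err.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should be close to the XOR targets 0, 1, 1, 0
```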


Types of deep learning techniques include the following:

• Feed-forward neural networks: Information moves in one direction without looking backward for re-analysis. Data is fed into the model and computer scientists can then train it to make predictions about different datasets.
• Convolutional neural networks (CNNs): This feed-forward neural network is best suited for perceptual tasks. CNNs receive images and process them as pixels, then identify unique features so they can later classify different images.
• Recurrent neural networks (RNNs): These move data forward but also loop outputs back through previous layers, giving the network a form of memory that helps it make predictions about sequential data.

Natural language processing (NLP) and conversational AI


NLP is another branch of AI that helps computers understand text and speech expressed in ordinary human language, rather than formal computer code. NLP combines computational linguistics (the modeling of human language) with ML and deep learning to help computers perform translations, respond to spoken commands and quickly summarize large volumes of text (sometimes in real time).

NLP can be leveraged for tasks including sentiment analysis, text summarization, email filtering, document analysis, predictive text and online search. Examples of the technology in use include voice-operated GPS systems, chatbots, smart assistants and speech-to-text dictation engines.
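For instance, a sentiment-analysis call can be just a few lines. The sketch below assumes the Hugging Face transformers library is installed and downloads a default English sentiment model on first use; the review strings are invented.

```python
# Minimal sentiment-analysis sketch using the Hugging Face transformers
# pipeline. Assumes the library is installed; a default English sentiment
# model is downloaded on first run. The review texts are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The support team resolved my issue in minutes.",
    "The app keeps crashing and nobody answers my emails.",
]

for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict with a predicted label and a confidence score.
    print(f"{result['label']:>8}  {result['score']:.2f}  {review}")
```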


A subset of NLP is natural language understanding (NLU), which uses syntactic (grammatical structure) and semantic analysis to determine the intended meaning of a given sentence.
Another subset, natural language generation (NLG), produces text responses through the use of RNNs and transformers. Similarly, conversational AI can simulate human conversation.

Generative AI


Generative AI (genAI) is a rapidly evolving type of AI that can create text, images, audio, video, music, code, 3D models and even art.

While researchers have used the technology for more than a decade, genAI models didn’t gain widespread popularity and adoption until OpenAI released ChatGPT (Generative Pre-trained Transformer) on November 30, 2022. The large language model (LLM) reached 1 million users in just five days (setting a record).
Other examples of LLMs include GPT-4 (also from OpenAI); Google’s LaMDA, PaLM and BERT; and Hugging Face’s BLOOM. Similarly, OpenAI’s DALL-E uses GPT-style and other deep learning models to create images from text prompts.

LLMs are fed massive datasets, in some cases spanning much of the internet and the written word. These models incorporate deep neural networks and transformer architectures that track relationships in sequential data to learn context and meaning. They are trained through ML approaches including unsupervised, semi-supervised and supervised learning.
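A drastically simplified illustration of learning relationships in sequential data: the toy script below (the corpus is invented, and a real LLM uses transformer networks with billions of parameters rather than frequency counts) records which word tends to follow which, then uses those statistics to continue a prompt.

```python
# Toy next-word model built from bigram counts over an invented corpus.
# This is only a stand-in for the idea of learning from sequential text;
# real LLMs use transformer networks, not frequency tables.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Continue a one-word "prompt" a few tokens at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))
```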


LLMs can be used for a variety of applications: powering chatbots and AI assistants, providing more direct responses to search queries, writing software, segmenting products for marketing purposes and identifying fraud (among many other use cases).

Diffusion models are another key foundation of genAI. These probabilistic generative models progressively corrupt data by injecting noise, then learn to reverse the process to generate new samples. Examples of this technique include the text-to-image model Stable Diffusion and the art generator Midjourney.
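The forward (noising) half of that process is simple enough to sketch in a few lines; the signal, noise schedule and step count below are invented, and the learned denoising network that reverses the process is omitted.

```python
# Sketch of a diffusion model's forward (noising) process: data is corrupted
# with a little Gaussian noise at each of many steps. A real diffusion model
# trains a neural network to reverse this; that network is omitted here.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1.0, 1.0, 8)          # stand-in for an image or signal
betas = np.linspace(1e-4, 0.2, 50)     # invented noise schedule over 50 steps

for beta in betas:
    noise = rng.normal(size=x.shape)
    # Keep most of the signal, inject a little noise at each step.
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

print(np.round(x, 2))  # the original structure is now mostly destroyed
```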


The many use cases of genAI are still being identified and continue to evolve by the day, but the technology is already being used in product and app development, blog and other content writing, marketing communication workflows, graphic design, and business performance reporting and management.

AI applications


AI is increasingly being leveraged across all industries (telecommunications, marketing, financial services, manufacturing, health care, R&D, entertainment … you name it) and its use cases are vast and seemingly endless.


Some of the most prevalent AI use cases today:

• Data analysis, including large amounts of data that are collected but otherwise ignored (what’s known as dark data).
• Predictive analytics and DataOps to identify trends and patterns.
• Recommendation engines to target relevant products and services to customers/consumers.
• Sentiment analysis to identify and categorize customer feedback.
• Customer support (chatbots and aggregation of previous customer interactions).
• Security and risk management.
• Network optimization in telecommunications.
• Application performance management (APM).
• Marketing management.
• Supply-chain planning, management and maintenance.


History of AI


The idea of sentient machines has been around in literature and film for decades (philosophers pondered the idea thousands of years ago), but serious conversation around it truly began with Alan Turing’s seminal 1950 paper, “Computing Machinery and Intelligence.” The since-dubbed “father of computer science” posed the question: “Can machines think?” He also put forth the now well-known Turing Test, which asks participants to differentiate between a computer’s and a human’s text responses.


The term “artificial intelligence” didn’t emerge until 1955, when it was coined by John McCarthy. The Stanford emeritus professor described it as “the science and engineering of making intelligent machines, especially intelligent computer programs.”


More than 50 years later, McCarthy further defined intelligence itself as “the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.”
Also in 1955, researcher Allen Newell, economist Herbert Simon and programmer Cliff Shaw co-authored the first AI computer program, called Logic Theorist. Then in 1959, IBM’s Arthur Samuel coined the term “machine learning” (ML) when discussing programming a computer to play checkers better than the human who developed it.


There ensued what some consider (at least until now) AI’s golden era. Computers were becoming smaller, faster and less expensive, and ML algorithms were improving. In fact, between 1956 and 1974, the U.S. Defense Advanced Research Projects Agency (DARPA) funded AI research.


Experiments included language translation and replication of brain neurons in neural networks. In 1970, computer scientist Marvin Minsky went so far as to proclaim: “from three to eight years [from now] we will have a machine with the general intelligence of an average human being.”


But in the mid-1970s, interest dwindled amid criticism and a lack of computational power to perform viable experiments. This is referred to as the first “AI winter.”


Things picked back up again in the early 1980s with expanded algorithms and funding. Deep learning techniques proliferated, as did expert systems leveraging “if-then” rule-based reasoning. The Japanese government, for its part, invested millions to help revolutionize computer processing.


But aggressive, overly lofty goals were not reached, and funding and interest once again ran dry, leading to a second AI winter that lasted from the late 1980s to the mid-1990s.


Still, determined researchers continued their work, and by the mid-1990s, many critical AI benchmarks had been reached. A huge step forward came in 1997, when IBM’s Deep Blue program defeated chess grandmaster Garry Kasparov.
Although there have been lulls here and there in the 20-plus years since, AI capabilities have continued to accelerate and the industry has been in an all-out sprint since the release of ChatGPT in November 2022. We are currently in the most prolific, active phase of AI in history, with no signs of slowing down.


Concerns around security and responsibility


While many laud its implications and benefits, there are also numerous concerns around AI.


AI bias


Some AI models that have been used to screen job candidates and sift through financial loan requests have been found to be biased against certain groups, including women and people of color. This is because models are trained by humans, and humans, whether consciously or unconsciously, are inherently biased in one way or another. This has prompted regulators to call for responsible AI (also known as ethical AI). The EU’s proposed AI Act would classify AI systems into different risk categories and either limit them or ban them outright. Meanwhile, the Biden administration has released a Blueprint for an AI Bill of Rights.


AI security


Generative AI is increasingly being leveraged by attackers to create deepfakes, carry out social engineering attacks, identify vulnerabilities in APIs, produce fake documents, guess passwords and sabotage ML embedded in security tools. Also, some enterprises that have adopted genAI tools have discovered that their employees are using them incorrectly and leaking personally identifiable information (PII) and other sensitive data (as was the case with Samsung, which has since banned the use of ChatGPT and other genAI tools).


AI replacing human workers


Particularly in the case of genAI, which can create content in seconds, many in creative fields have sounded alarms, and some have already lost their jobs to tools including ChatGPT. The other side of this argument, though, is that AI will augment workers by helping them do their work more efficiently.


Getting close to AGI or super AI too quickly


There are widespread societal fears that AI is moving too quickly and could soon spin out of control. Such fears are likely stoked by films such as 2001: A Space Odyssey and The Terminator, but top leaders and luminaries in the field, including Elon Musk, “godfather of AI” Yoshua Bengio and AI research pioneer Stuart Russell, have called for a pause on AI development, citing “risks to society.”


What is artificial intelligence: Key takeaways


1. AI models combine large datasets with processing algorithms and fast computing power to automate tasks, derive insights, make predictions and more.


2. ML is the process of using mathematical models to help computers learn without direct instruction; ML algorithms can detect patterns, anticipate what will happen next and provide recommendations.


3. AI is increasingly being leveraged across all industries (telecommunications, marketing, financial services, manufacturing, health care, R&D and entertainment) for use cases including data analysis, predictive analytics and customer support, among many others.


4. The first AI computer program came out in 1955. In the decades since, development waxed and waned until the release of ChatGPT in 2022 kicked off a new, highly active phase.


5. There are several concerns with the new high-speed development of genAI, including fears of bias, issues with security and worries that AI will replace human workers.

 

 

 

