Humans are the most intelligent species found on Earth. The abilities to learn, understand, and imagine are qualities naturally found in humans. Developing a system that matches or exceeds these abilities artificially is termed Artificial Intelligence. Before talking about the AI hype, let’s go through a brief history of AI.
Modern AI got its start when the term “Artificial Intelligence” was coined in 1956, at a conference held at Dartmouth College in Hanover, New Hampshire. Government funding and interest grew, but expectations ran far ahead of results, leading to the period known as the first AI Winter, from 1974 to 1980. The British government later resumed funding AI research to compete with Japan, but this couldn’t prevent a second AI Winter between 1987 and 1993.
Over the last decade, the field has made significant progress. The 2010s marked a booming era for AI, with big tech companies such as Google, Facebook, and Microsoft frequently showcasing impressive AI capabilities.
After the development of more powerful processors like GPUs and TPUs in the 2010s, Deep Learning took major leaps forward. Neural networks started producing superior outcomes compared to other algorithms when applied to the same data. Notable accomplishments resulting from Deep Learning include Alexa and Siri understanding spoken language, Spotify and Netflix suggesting content, and Facebook automatically tagging friends in photos.
As hardware progressed with faster processors and larger memory capacity, Quantum Computing also entered the scene, promising to further accelerate AI workloads. Simulations that once took months can now be completed in days, hours, or even less.
A quote often attributed to Albert Einstein says, “The measure of intelligence is the ability to change.” The journey of AI, which commenced in the 1950s, is driven by the goal of creating machines capable of emulating human behavior with remarkable skill. However, a pertinent question arises: how does AI confront challenges that surpass the realm of human imagination?
Despite the notable achievements in the field of AI, there exists a disparity between the high expectations and a genuine comprehension of its inner workings. Often, tech leaders stir excitement by proclaiming that AI will replace humans. Yet, the current reality is that we lack the capability to design an AI-controlled robot adept at a seemingly simple task like peeling an orange.
In recent times, there has been a surge of enthusiasm surrounding AI. However, the reality falls short of the soaring public anticipation. Self-driving cars and healthcare, in particular, have garnered significant attention within artificial intelligence. These areas, while brimming with potential, also pose the greatest challenges, with timelines extending far beyond the initial optimistic projections. AI is still in its nascent stages; extraordinary advancements are expected in the coming years, but reality may not match the rapid progress often portrayed in movies.
Businesses often captivate customers by leveraging AI as a marketing buzzword, integrating minor AI features into their products for commercial appeal. In today’s market, AI is affixed to consumer goods in a somewhat contrived manner: AI fans, AI refrigerators, and even AI-enabled toilets. The image below illustrates the prevalence of “Artificial Intelligence” and related terms in marketing.
CHANGE YOUR MIND!!
AI is capturing the interest of students and researchers more than ever. However, many fail to recognize that becoming an expert isn’t as simple as picking up tools like PyTorch or TensorFlow. Employing pre-built models with frameworks like Keras, TensorFlow, and Scikit-learn offers little that is unique or distinctive, especially when following online tutorials and merely swapping the provided data for our own. The real challenge lies in comprehending and mastering the underlying architectures, which demands a deep grounding in probability, statistics, information theory, and calculus.
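To make the point concrete, the whole “follow a tutorial and plug in your own data” workflow can amount to just a few lines. Here is a minimal sketch using Scikit-learn; the synthetic dataset and the choice of logistic regression are purely illustrative, not anything from a particular tutorial:

```python
# The tutorial-level workflow: load data, fit a pre-built model, report a score.
# None of the hard concepts (probability, optimization, regularization) are
# visible here -- the framework hides them all.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "our own data" swapped into the tutorial.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One line to train, one line to evaluate.
model = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

Anyone can run this; understanding *why* it works, or what to do when it doesn’t, is where the real expertise lies.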
AI components are mathematically validated techniques implemented in code to operate on data, and their applicability hinges on the specific problem they are tailored to address. If machine learning isn’t yielding the desired results, it’s often due to incorrect usage rather than inherent flaws in the methods themselves.
Jobs that involve repetitive tasks and don’t require human decision-making are susceptible to replacement by AI. However, this shift won’t obliterate job opportunities; rather, it will usher in a new array of skill-based roles. Consider the example of self-driving cars: Industry players like General Motors, Google’s Waymo, Toyota, Honda, and Tesla are all working on autonomous vehicles. While driving jobs might diminish, new avenues will open up for roles centered around creating and maintaining these advanced systems. Humanity is poised to evolve towards greater intellectual pursuits, and the present moment is opportune for such a transition.
At present, we commonly use “AI” to encompass Big Data and Machine Learning, with a primary focus on uncovering patterns and predicting upcoming trends. However, AI still struggles to explain its decisions. In critical domains such as finance, healthcare, and energy, where even a small deviation in accuracy can have substantial consequences, AI’s reliability remains questionable.
Taking all factors into account, the initial AI excitement will likely subside as people recognize that much of what we label “AI” is essentially “pseudo-AI,” with human intervention still a crucial element behind the scenes. Even so, despite the apprehension, skepticism, and misconceptions around AI, the enthusiasm and buzz surrounding it don’t seem likely to fade in the immediate future.