Nvidia's Shocking Rise Exposes the AI Truth We All Missed


Today, while I was listening to a business podcast about Nvidia and reflecting on its recent Q4 2023 earnings beat, an intriguing thought struck me. The analyst admitted he had been wrong about Nvidia, and that admission sparked a realization about the broader perception of AI. Here's my take: Nvidia's remarkable economic growth from October 2022 through Q4 2023 isn't just a financial success story; it's a mirror reflecting our collective underestimation of AI's capabilities and advancements. As the gap between perception and reality narrows, it is not only reshaping our understanding but also redirecting financial resources toward AI's growth, marking a pivotal moment in technology awareness.

The first thing I want to address is why I'm even talking about this. Historically I've told people I'm a network engineer, but in recent times I've come to consider myself more of an IT infrastructure engineer. I work with VMware infrastructure, networking infrastructure, and automation with Python, since the infrastructure is at a scale where I can't possibly touch every box by hand. That said, my day-to-day work is not in AI. I'm not a data scientist, I'm not a machine learning engineer, and I'd never call myself a software developer or “prompt engineer”. In fact, I think the role of prompt engineer is laughable and not a real AI job; understanding how to talk to an AI will be a skill for every job in the future.

I'm opining on AI because I love the topic and now view it as a mainstream one. I recall when my AI fascination began. When I was 15, in 2000, my favorite video game of all time was released: Perfect Dark. The most compelling feature in the game for me was its suite of sophisticated bots that you could play against or team up with. Back then, high-speed Internet wasn't everywhere; if you wanted an opponent, it was either bots or a LAN party. The bots dynamically navigated all of the maps, prioritized goals based on the game state, decided which weapon was best to acquire and use, and decided which opponent to target when given multiple options. Aside from being entertained by the gameplay, I spent many hours wondering how it all worked. As is often the case for boys and video games, my appetite for the game was pathologized. If someone had recognized the nuance of my interest in the game and told me I could learn to build such things by pursuing computer science, I think my life path would have been much different. But this isn't a gripe session; I'm only pointing out that my interest in AI goes back at least to the year 2000, when I was 15.

I have a passion for the topic and follow it about as closely as a layman can. In truth, I'd say I'm more than a layman, since I received computer science training in my university coursework (thanks, G.I. Bill). I remember intermediate programming, where the first four classes were lectures, readings, and chalkboard scribblings. At the time, it was odd to me that computer science could be taught without computers. As students, we waited eagerly to discover what computer the professor was using. Was it a Windows PC, a Mac, or Linux? The professor, Will Trobaugh at the University of Colorado Denver, used a Mac. I still recall the hours of writing my code and program logic (homework) with pencil and paper. It may seem odd for a field so associated with being high-tech, but I've never met anyone who can hold the contents of a graph theory problem in their head. Sometimes you just need to write stuff down.

Many pundits in the business world have been completely surprised by the rise of Nvidia. I still know quite a few people who've never used ChatGPT or the like. Over the last year of conversations I've had on the topic, I often get responses like “I used it and it can't do X” or “I tried X and ChatGPT completely failed!”. While I acknowledge LLMs have limitations, these comments often come from people with a limited imagination. If ChatGPT fails at some task you asked it to do, we live in a time where a realistic strategy is to tell your computer to take a deep breath and try again. In one study, prefacing math word problems with “Take a deep breath and work on this problem step-by-step” raised a model's accuracy to roughly 80%, versus about 34% without the phrase (https://arstechnica.com/information-technology/2023/09/telling-ai-model-to-take-a-deep-breath-causes-math-scores-to-soar-in-study/). I think this example represents one way that people misunderstand the capability of LLMs.
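To make that concrete, here's a minimal sketch of the trick, assuming the OpenAI Python SDK and an API key in your environment. The model name and the sample problem are illustrative, not taken from the study; the point is simply that the magic phrase is just text prepended to the prompt.

```python
# Minimal sketch of the "take a deep breath" prompting trick.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative problem, not from the study.
PROBLEM = "A train travels 120 miles in 2 hours. How far does it go in 5 hours?"

def ask(prompt: str) -> str:
    """Send a single-turn question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same question, with and without the phrase from the study.
plain = ask(PROBLEM)
coaxed = ask("Take a deep breath and work on this problem step-by-step. " + PROBLEM)

print("Plain prompt:\n", plain)
print("\nWith the phrase:\n", coaxed)
```

The study's deeper point wasn't that phrase in particular; it's that small changes to the wording of a prompt can swing results dramatically, which is exactly why “it failed once” is a weak measure of what an LLM can do.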

Modern LLMs are to traditional AI what the automobile was to the horse and carriage. When the automobile was new, people couldn't comprehend it, so it was explained as a horseless carriage. I think we're at a similar moment in history with AI. I claim that the stock market serves as a measuring stick for how unaware the general population is of AI capabilities. As the gap between actual AI capability and perceived AI capability closes, AI companies, with Nvidia being the most foundational, will capture that lack of awareness as revenue. The people who had the conviction to see this revolution coming are in a strong position to benefit financially. Through this lens, it's not difficult to understand why Sam Altman is trying to raise $7 trillion.

What's your take on the future of AI, and how do you see it impacting your field or daily life?

-Daryl


