Big news in the tech world! The 2024 Nobel Prizes shone a bright spotlight on the hidden story behind today's AI breakthroughs. The physics prize went to John Hopfield and Geoffrey Hinton for foundational discoveries that enable machine learning with artificial neural networks. The chemistry prize honored Demis Hassabis and John Jumper for their work on predicting protein structures, shared with David Baker, who was recognized for computational protein design.

Guess where Hassabis and Jumper were working? At Google DeepMind. Hinton, too, spent ten years at Google before leaving in 2023. Their academic roots run deep, but these corporate labs are where much of today's top AI magic now happens.

Why is this happening? It's not just about ideas but also about power-packed machines. Cutting-edge AI models need huge compute clusters, carefully curated data, and crack engineering teams. Google's development of tensor processing units (TPUs) shows how hardware has become a star of the AI lab, not just an IT tool. Microsoft, for its part, fuels the race by funding OpenAI and supplying it with cloud supercomputers.

Here's the spicy question: if public money helped get some of this AI fire started, shouldn't the public get something back? Early AI ideas, research fellowships, shared datasets, and open publishing all got a public boost. But the decisive ingredients, running models on giant clusters and controlling the data, now belong mainly to big tech. Back in the day, companies like Bell Labs and IBM published their prize-winning research openly. Today, reproducing Jumper's work would take budgets and engineering skills no average lab has. That leaves a few big players in control of AI's crucial paths and winning ideas.

So when public agencies put money into research or cloud capacity, they should demand openness: code, model weights, and evaluation tools should come out into the open. Don't get it wrong: corporate labs can do great science. The real call is to break the private grip on AI's secret sauce.
Look at Google DeepMind's AlphaFold 2: its code and predicted structures are open, helping scientists worldwide work smarter. That openness was possible partly because public bodies helped support the underlying resources. But for giant AI models with billions of parameters? Big companies argue that safety requires secrecy. That idea needs a remix. Instead of total lockdowns, there should be smart release plans: sharing model weights in stages, for example, with clear safety criteria kept separate from business interests.

Another hot point is access to compute. If machine power is what blocks science, then computing should be treated like a public utility. Governments should set up "compute commons" so universities and small firms get fair access, on the condition that they share results openly and safely.

The bottom line: the Nobel Prizes highlight where AI's stars sit, in corporate labs with massive data and computing power. And those same tech giants, backed by public money, hold the key resources. Public agencies can fix this by tying funding to clear openness rules. And instead of framing it as "industry versus academia," we should ask: who sets the research agenda? Who owns the AI tools? Who benefits?

So, applause for the researchers! But it's also time for rules that turn public funds into public goods: open code, data, model weights, benchmarks, and access to computing. Let's make sure the next big AI award means real wins for everyone, not just for secretive corporations.