In 2024, the Nobel Foundation made headlines by awarding top prizes to scientists behind major advances in artificial intelligence (AI) and machine learning (ML). John Hopfield and Geoffrey Hinton won the physics prize for their foundational work on artificial neural networks. Demis Hassabis and John Jumper shared the chemistry prize for predicting protein structures, a breakthrough achieved at Google DeepMind. Hinton himself worked at Google until 2023.

These awards show where much of the cutting edge in science now sits: large corporate labs backed by tech giants such as Google and Microsoft. And it is not just ideas that matter. These breakthroughs demand massive computing power, enormous datasets, and expert teams. Google, for example, built custom chips called Tensor Processing Units (TPUs) for AI workloads, and Microsoft supplied OpenAI with cloud supercomputers. Research of this kind depends heavily on private infrastructure.

Here is the uncomfortable part: much early AI research was publicly supported, through universities, government grants, open datasets, and more. Yet once those ideas become products, private companies keep tight control over the technology, making it hard for others to build on it freely. Decades ago, companies like Bell Labs and IBM did great science and shared their findings openly, so other scientists could reproduce and improve on them. Today, recreating something like Jumper's protein-structure model requires enormous funding and specialized skills, which means only a handful of large companies can do this work at all.

Many experts argue this should change. If public money helped fund the research, the results, including code, evaluation data, and trained AI models, should flow back to the public. Governments should demand openness when awarding grants or purchasing cloud computing, with the goal of preventing private firms from locking away ideas that started with public support.

Consider Google DeepMind's AlphaFold 2, an AI system that predicts protein structures. The team released both the code and its predictions publicly.
That release helped scientists worldwide, proving that sharing works. But when it comes to the most powerful AI models, with trillions of parameters, companies often keep details secret in the name of "responsible release". Critics say that label is often just an excuse to stay closed.

What if computing power were treated like a public utility? Countries could create public pools of compute, offered affordably to researchers and startups. That way, scientists would not need corporate permission to explore and build on AI advances.

The 2024 Nobels reflect these shifts. The winners worked in big tech labs because that is where the data, the compute, and the teams are; Microsoft's backing for training large AI models illustrates how such projects get done. It is a story of powerful resources concentrated in few hands.

Public agencies must act boldly: tie funding to open-sharing requirements, demand clear reporting on costs and data use, and back platforms that everyone can use safely. For private companies, trust should be earned through real contributions to the wider AI community, not just profits.

Finally, it is time to stop framing this as simply "industry versus academia". The real questions are: Who decides research goals? Who controls the tools and data? Who can reproduce and build on the work? And who benefits from the resulting AI models? Celebrating Nobel winners is fine, but we must also ensure that public money leads to public gain, through open code, data, and compute. The path forward is clear: reconnect the power of public knowledge with private infrastructure through smart policies and contracts. Then the genius behind future Nobel prizes will truly belong to all of us, and that is the real prize.