
What Are We Doing?

A discussion of hackathons, AI, and the death of creativity.

This year marks my third and final year organizing Bitcamp, one of two hackathons[1] hosted at the University of Maryland and among the largest on the East Coast.

Bitcamp’s 10th anniversary drew a record post-pandemic attendance, with well over a thousand hackers. The more surprising thing, though, was how many of them stayed the night past the opening ceremony. In years past, the overnight disappearance of hackers was a natural progression, one that has seemingly, as of 2025, come to a halt. In the wee hours past midnight, hundreds of hackers remained in the building while their beds begged for their attention. And with 36 hours to build a project, you’d think many of them would acquiesce. Instead, the number of project submissions surged 40% from the previous year. Whether it came from our marketing team’s strenuous efforts or from the desperation to build up a resume in a disastrous job market, our organizing team felt an unparalleled wave of excitement and ambition this year.

I noticed one other peculiar thing. I’ve seen thousands of students come and go from the hacker floor: the hubbub of laughter, the suspicious-looking kids sneaking a few extra Red Bulls from the snack table, and the sighs of exasperation over not being able to come up with an idea. These have been consistent, recurring phenomena over the past three years, but one has surprisingly (or perhaps unsurprisingly) dwindled. Can you guess which? Ah, yes, those sighs of exasperation have been ventilated through the roof with the help of our friend, Chat, as the youth call it today. Though you may be more familiar with its full government name, ChatGPT.

These text-generating behemoths have seemingly changed the game. Instead of brainstorming novel, feasible project ideas for hours, a quick query to our AI can give us hundreds of ideas within minutes. With just a bit of information about our background, Mr. Chat can give us a tailored list of ideas that suit our interests or even our strengths.

One little prompt here and there can help me set up my project, find that pesky bug, install that fancy library, and teach me how to scrub my body in what seems to be a giant sink(?). Instead of reading programming “guides” written by people who don’t speak English or who strangely communicate via some Shakespearean dialect, I can get a peachy response from my AI friend with the patience of a saint. Rather than going down a rabbit hole of forums full of the most righteous self-aggrandizing, passive-aggressive jerks telling me that “the question has been asked before”, I can get a response to my liking from my understanding virtual assistant.

I mean, isn’t this great? The art of building technology has never been this easy or accessible. First-time hackers can leverage these AI tools to build something they would have never dreamed of, instead of sitting like a duck after installing the wrong version of Python. Developers don’t have to spend 35 of those 36 hours hunting for a missing space on line 420 of f*ck.py. People can iterate on their ideas faster than ever, and there’s no doubt in my mind that the rising quality of projects over the years has come from advancements in these AI programs. I feel elated knowing some beginners left Bitcamp loving technology more, having created something awesome that they wouldn’t have been able to otherwise. The future is clear, and the future is AI. People will be creating and developing with AI at their side, enhancing a process once hindered by experience level, human error, and the speed of thought. People at Bitcamp and in industry alike will integrate AI into their work, developing even faster and better as these tools improve.

But, to what end?

To What End?

A common argument floating around is that AI is simply a tool in your arsenal that will help you achieve bigger things. In the words of Mike Swift, CEO of Major League Hacking (the primary student hackathon league), who spoke at Bitcamp this year, AI gives hackers superpowers. Techno-optimists claim AI extends your abilities the same way calculators save needless computation time without completely destroying math. In the context of the benefits I mentioned above, this seems like a reasonable conclusion. However, this point of view is short-sighted.

Comparing AI to a calculator diminishes what an AI is capable of. This year, I had the privilege of walking around the hacking floor to see what people were creating. I was impressed, to say the least. I saw a game where you pretend to be a UMD officer shooting parking tickets at rogue cars, a push-up competition app that uses image processing to track your movements, and a program that would call to schedule a haircut or doctor’s appointment on your behalf. The teams brimmed with the excitement of having created something, which is what hackathons are all about. After browsing for a while, I stopped by a curious team with an amazing-looking landing page, one I would expect from a full-fledged company. They showed me a short demo that blew my mind. The app was packed with professional features and design elements. It was good, perhaps a little too good. My mind screamed at me to poke and prod a little. I asked a few questions along the lines of: How’d you guys design this? Did you implement a backend? How did you build out this feature? Nothing. My straightforward questions were met with blank stares or redirection to a different teammate. I couldn’t get one cohesive or direct answer out of any of them. At that moment, I diagnosed the team based on their symptoms: this app was probably fully AI-generated.

Was this some unique case? Likely not. I’m sure every single person in the building used AI at some point during the hacking period. But a question or prompt here and there feels different from completely generating your project with an AI. There are tools out there that can generate an entire app in one go, and I’ve already heard stories of startups reaching hundreds of thousands in ARR[2] from a vibe-coded (fully AI-prompted) initial product. Still, something feels wrong about seeing this at a hackathon. Historically, success at a hackathon was a result of a team’s creativity, ingenuity, execution, or technical capability. A fully AI-generated app measures a small subset of these criteria, or perhaps none at all. If many teams begin vibe-coding their way to the top, do hackathons simply become a prompting competition? Do technical capabilities no longer matter? Does Bitcamp become a contest to see which AI code-generation platform is the best?

These AI-generated apps take away from the spirit of a hackathon. The best way I can describe it is that the process behind the final product matters. It doesn’t matter if the end result is a gem if finding it requires the massacre of a village. This notion already exists in almost every creative domain. You can buy a Van Gogh replica at Target for $20. You’d rather watch Magnus Carlsen win a chess tournament than Stockfish beat other soulless computer engines. You’d rather watch a real Studio Ghibli film than some slop OpenAI can generate out of their corporate backroom. You’d rather buy a hand-crocheted hat than some factory-produced assemblage of yarn. All creative enterprises value the creation process, and usually, that process is what gives the art its value. The million-dollar Van Gogh is a reflection of his story, struggle, and artistry, while the Target painting is barely worth the canvas and paint[3].

To me, a hackathon is technology’s place within this creative enterprise. When evaluating projects, we value the same characteristics as judges at an art exhibit. What wins an art competition is usually not the “best art”, which is already impossible to define universally. We look for more signals: inspiration, novelty, the bigger story, and so on. The prize categories help us narrow down what makes something the “best” for that one category, but the other criteria matter nonetheless. I won’t sit here and say hackathon judging is fair; with hundreds of submissions and dozens of judges on a hectic judging floor, information will be missed and judging will be uneven. But overall, a real semblance of artistic integrity persists.

Countering a Counterargument or Two

A friend from class mentioned that hackathons are meant to teach you skills that will help you in the real world. As companies move toward AI-driven development, hackers should follow suit to keep up with market demands. Consequently, hackers should be allowed to use AI tools as a means to that end.

I fully buy that the corporate (and even non-corporate) world will move toward AI-driven development and that people who don’t stay up to date with the latest tools will get left behind. However, I take issue with the notion that hackathons are simply funnels to the corporate world. There’s no doubt that many, if not most, people go to hackathons with some career goal in mind, and hackathons are some of the best places in the world to gain those real-world skills. But I always saw these skills as a side effect of building practical, useful, fun, or creative things. They translate miraculously to industry, where companies are essentially trying to do the same thing. The spirit, though, is where the two differ. Almost all companies operate with a bottom line: make as much money as possible. Toward that end, you can meaningfully justify accelerating development with AI, which is going to do what some developer was going to do anyway. At a hackathon, you’re not trying to increase revenue or shareholder value; you’re trying to impress some judges. You’re trying to learn and be creative. The final product isn’t rewarded nearly as much as it is in the commercial landscape.

When we try to extend this argument to other domains, it also falls apart. In college admissions nowadays, admissions officers look for the telltale signs of AI generation. Once they are quite sure an applicant’s essays were AI-generated, do they reward the applicant if the essay was really good? No! They toss out the application. The officers are trying to learn about someone’s character, to the extent that a great AI-written essay is worth less than some lousy piece that comes from the heart. In a speech or art competition, you would be furious if you came second to someone who AI-generated the next “I Have a Dream” speech or asked ChatGPT to Studio Ghibli-ify their lousy crayon art. You wouldn’t tell these competitors to simply use the tools at their disposal, the same way you wouldn’t tell a hacker to contract their project out to a dad with 20 years at Google. When you have a means to an end, those means matter. But in this context, the ends matter, too. And a corporate end and a hackathon end are not the same thing.

What Should We Do?

This is the question I never like getting to because I truly don’t know. It’s not like we can halt AI development now, and maybe we don’t want to either. AIs are solving protein folding, helping cure diseases, and fostering unparalleled technological innovation. We haven’t drawn clear boundaries around what AI can and should do, and more importantly, what it should not do.

So, what about hackathons? Clearly, it would be unreasonable and infeasible to bar the use of AI. There are so many ways in which AI helps people see their visions come to life, and it would be unfair to take that resource away. But, at the same time, nobody wants to see a giant influx of fully AI-generated slop win all the prizes. How do we deal with projects where some snippets were AI-generated but others were not? How does a hacker’s experience level change how we evaluate their use of AI? A standard along the lines of “Does this violate the spirit of a hackathon?” is too unreliable and unmeasurable to become a guideline, but I don’t have any better ideas, either.

One could argue that this entire issue is well beyond the scope of hackathons. There’s no doubt that all creative pursuits are at risk now that what was once a laborious and difficult task has been economized to how many quality tokens (i.e. good output) you can squeeze out of a rack of hundred-thousand-dollar GPUs. How do art competitions suss out competitors who took inspiration from AI? How do speech competitions determine which speeches have “too much” AI? Practically, it would be impossible to outright ban AI in these enterprises, and it may not even be reasonable to do so. It’s almost a guarantee that most contestants will interact with something AI at some point or another. It’s simply a new landscape with potentially irreparable consequences. It may take a collective effort within the creative disciplines to conjure some sort of remedy, a feat far beyond my capabilities.

What’s Different This Time Around?

History is littered with breakthrough inventions that changed the world. From the light bulb to the transistor to the first vaccine to the World Wide Web, human ingenuity has flipped the world on its head hundreds of times over. However, AI is a beast we have never faced before. The calculator comparison, that AI will no more uproot building than calculators uprooted math, only makes sense if you don’t think about it at all. AI doesn’t just calculate some numbers; it can solve the entire problem. It’s a hammer that can do the hammering without the person. What use is there for you in that case?

Many people, techno-optimists and techno-pessimists alike, love to argue that AI still messes up occasionally on simple or complex problems and that humans are still much better overall. My question to them is: for how much longer? We’re only on year three of this new AI boom that will potentially last the rest of human existence. Did the Wright brothers’ third plane design fly better than a falcon or an eagle? No. Did anyone expect it to? Also no. Do they really expect that AI has reached some insurmountable ceiling in year three of development? Yeah, I don’t think so. It’s just a matter of time.

In the movie “I, Robot”, Will Smith’s character asks a robot on the verge of sentience, “Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” The robot simply responds, “Can you?” We operate under the fantasy that we can do things better than AI, but each of us is good at only a small subset of things. The AI assistants are better at virtually everything else. As I said in my other blog[3], an AI might draw a sixth finger every now and then, but can you draw five normal ones?

The difference is that we have finally developed an algorithm that lets a machine mimic human thought. It might not be real “thinking,” but can you tell the difference? If it can solve problems outside of its training data that you cannot, does that mean you cannot think, either? It’s a scary thought that machines have developed the capacity to reason and communicate in a way that is so human. Every job in existence relies on human thought in one way or another, but what if we don’t need humans to do the thinking anymore? What purpose do we have left?

All previous breakthroughs have been tools created and wielded by humans, but AIs can essentially operate independently, to the extent that we rely on them to think on our behalf. The once strenuous activity of thinking has been outsourced to a quick prompt to a technological beast powered by the Earth’s melting ice caps. And there’s no going back, either. We’ve come upon something so powerful, so game-changing, so suddenly, that there really doesn’t seem to be a way back.

Regulation

As usual, regulation hasn’t caught up with AI at all. After all, it would be naive to think that our political overlords in Washington could somehow tackle this technological chasm in any kind of timely fashion. Legalized Ponzi schemes via crypto sh*tcoins are still running rampant with virtually zero comprehensive legislation around them, and web3, a technology far less existential than AI, has already been around for over a decade.

To be fair, it’s not like anyone has a great idea about what to do, either. We’ve unlocked the early stage of some new reality that I don’t think most people even have the capacity to grasp. Techno-optimists envision an optimized future enhanced with AI tools but cannot seem to see past the next 10 years. Techno-pessimists would have you believe that AIs are just data regurgitators that will never surpass human capabilities, either because they fear they may be replaced one day or because they gravely misunderstand what the technology is capable of.

I’ve seen the term AI safety thrown around, where people research how to keep these systems safe, whatever that means. Part of that safety might mean interpreting how AIs “think” or putting guards around the output they can produce. Another part might be an effort to keep AIs unbiased or politically neutral so they are “fair”[4]. There are things you can control about these language models’ outputs, but those controls obscure the problems posed by the existence of AI itself.

Many of my hyper-intelligent friends are joining the AI space, likely because the field itself is super interesting and there are lots of important problems to solve (plus the money is good). When I asked one of them what he thought AIs entail for the future, he gave me a response as short as it was understandable: “Oh yeah, we’re f*cked.” Make of that what you will.

As of now, the tech people will continue to build without thinking. They’ll navigate the intricacies of AI the same way they navigated the invention of the calculator, not realizing the two are not at all the same thing. They can’t stop because their competitors won’t stop. They’ll make this AI and that AI until everything is AI. They won’t think because it’s too difficult to think. They’ll build and build toward some unreachable utopia because it’s easier than sitting down and figuring out the consequences.

Will we wait until the storm clouds gather at our doorstep before we learn to weave shelters from the rain? Will we even be able to weather the storm once it’s reached the shore?

We ought not to wait until the robots become Terminators, only to realize they rose from the ashes of today.

What the Future Holds

I have some predictions for the future. To be tonally consistent with the rest of this blog, they’re all negative.

Phasing Out Humans

We might become irrelevant soon. Given the complex reasoning of AIs today, it’s hard for me to see how AI doesn’t surpass human intellect. After all, they have access to more data and information than we can ever hope to see, connected to the interwebs in a way our brains cannot be. AIs today suffer from huge hallucinations and flaws, but I doubt these problems will persist for long; when they are gone, how can we compete?

What happens once AI surpasses all of us? Even if you don’t believe that can happen, what if it surpasses even 10% of us? What are the sub-AI humans supposed to do once they’re practically phased out of the market? Right now, AIs are limited in what they can tangibly reach; they have limited access simply because the infrastructure hasn’t been built out (remember, we’re at year three of AI development). I’ve seen my peers essentially act as a medium transmitting information from project to AI and back. Are we just spokespeople communicating a problem from the real world into ChatGPT? What happens when AI no longer needs us to be that spokesperson and acts on its own will? Do we really think we can prompt the AI better than it can prompt itself? What happens when AI can access your databases, your company’s custom tools, and your working environment? What will be the need for you then?

Arguably, the hands-on “blue-collar” jobs might be the safest for now. AIs do not yet have the physical capability to interact with the real world. Our robots are advancing fast but are nowhere near sophisticated enough to interact with reality the way humans are conditioned to. Jobs in a purely intellectual or virtual medium, like software engineering or journalism, are at stake. The day people build out the infrastructure for AIs to interact with entire systems is the day our white-collar world starts to collapse. Hold onto your seats as long as you can.

Rising Need for Intelligence

Automation advocates routinely reassure us that for every job consumed by technology, new opportunities will bloom elsewhere in the ecosystem. A burger-flipping robot might replace burger flippers, but it will generate new factory jobs, robot maintenance jobs, robot designer jobs, and so on. People will just be rerouted in pursuit of burger-flipping efficiency. But you may notice that it’s a lot harder to do electrical work on a robot than it is to flip a burger. Historically, every automated job has been replaced by a sea of jobs that are more complex. As we advance technologically, we require more and more people to wield these new, more complicated…technologies. At some point, we’re phasing out the people who are incapable of taking on complex occupations.

Many people will argue for the endless capacity of all humans, but I’m here to dampen your naive optimism. The existence of unintelligent and intelligent people means there’s a spectrum of capability in the world. As sad as it is to say, some people are less able to navigate complex environments, through no fault of their own. Imagine if AI advances so far that the market solely demands people capable of Ph.D.-level math. Virtually everyone becomes useless, because most of us cannot do Ph.D.-level math. But AI is different even from a “more advanced technology,” as it is capable of fully solving problems from start to finish. Once AI evolves past average human intelligence, most people will cease to have any intellectual competitive edge. The landscape will be stratified into those who can and those who can’t. And we really don’t know what severe consequences that will have.

AI Brain Chips

Another idea is that we’ll eventually all be cyborgs with the latest and greatest AI implanted directly into our brains. We’ll be soulless, brainless cockroaches that can only regurgitate what our AI overlords tell us, surrounded by AR-targeted advertisements that we are required to see because we can only afford the cheapest subscription to BrainGPT. And although we’d be sad about losing half of our AI overlord-granted monthly stipend to that subscription, we would hypothetically be sadder about the soul we lost, if we somehow still had the capacity to think.

Do we want to live in a world where we’re just some internet-connected, half-robotic abomination? Will we be using the technology, or will the technology simply be using our bodies as a skin suit? I cannot fathom that we’d somehow still be the ones in control of ourselves, and even if we were, I don’t think it would last for long. I don’t crave some futuristic reality where we are simply a mouthpiece for some AI living rent-free in our heads, where some computer algorithm flirts on our behalf, does job interviews on our behalf, or FaceTimes Mom on our behalf (if any of those things still exist). Isn’t part of the human experience that we experience things as humans?

A Whole New World

In a world where AI does all of the intellectual work, what is there left to do? Will we sit around and twiddle our thumbs? Will the government, run by AI or maybe not run by AI, give us monthly payments after we have lost all our usefulness?

Was the end goal of any economic system to become perfectly efficient, with no more human input? Does it even make sense to have an economic system if humans are no longer part of it? Are we all reduced to nothing when thought is no longer unique to us? Will humans be satisfied knowing they have no practical utility outside the mundane drone of existence?

What’s the point of it all? What do people want AI to be? What do people want…people to be?

I cannot fathom that a future with AI can rest on the systems that governed our past. Perhaps pushing AI toward its utopian ending is a silly thought, given that we will destroy ourselves before it could ever happen.

Al Fin

We somehow started from a simple discussion of judging hackathons and ended with an existential discussion of what it means to be human. I appreciate you sticking around for the ride. By now, I’m sure you’re as sick of seeing the two letters “AI” as I am of writing them. I really don’t think we’ve grasped the magnitude of what we’ve created, and I don’t think we’re prepared to face the consequences. The most popular takes on AI I see as a student in tech are so simplistic that I cannot help but think most people have not thought this through at all. I have not found much engagement with the serious implications of AI that I’ve raised today, which worries me about what the future might hold.

I hope in writing this that people begin to understand that these language models are wolves in sheep’s clothing. A lot is lurking beneath the surface, and we must tread carefully. The waves are beginning to hammer the shore, and I pray we will be ready when the tides come crashing in.


  1. A hackathon is an event where small teams (usually 4-5 people) collaborate on a technological or engineering project (e.g. a website, game, robot, or mobile app) for 24-48 hours. Teams submit their projects to one or more judging categories (e.g. Best Social Good, Most Creative), hoping to bring home a prize.

  2. Annual Recurring Revenue, i.e. projecting revenue out to a full year based on a shorter time frame: $100k/month = $1.2M ARR.

  3. I touch on the topic of human creativity a bit more on my short-form blog: https://echen.io/shorts/human-creativity/

  4. It’s arguably impossible for anything to be neutral or unbiased; any statement made at all carries a value system, often controlled by whoever creates the AI. However, I guess it’s slightly better to have ChatGPT instead of StalinGPT or HitlerGPT.