Ship a developer tool that saves time, reduces mistakes, or improves observability; something you’d genuinely want in your own workflow.
1st place: $2,000
2nd place: $1,000
3rd place: $500
Build a solution that improves healthcare experiences by reducing admin burden, improving coordination, or helping people navigate care without requiring medical diagnosis.
1st place: $2,000
2nd place: $1,000
3rd place: $500
Create a tool that improves learning outcomes by making education more personalized, accessible, and engaging for students, educators, or self-learners.
1st place: $2,000
2nd place: $1,000
3rd place: $500
Build a product that meaningfully improves a real-world workflow for students, creators, or communities in under 24 hours. Prioritize usefulness, clarity, and a tight demo over breadth.
1st place: $2,000
2nd place: $1,000
3rd place: $500
The Token Company (YC W26) is the first commercial lab building proprietary machine learning models for compressing LLM input by removing the least significant tokens.
The technology enables companies to fit more context into LLMs, save on input token costs, and improve model performance without affecting output.
As compute costs continue to scale, the AI industry is hitting a wall where high-level inference becomes an expensive luxury. This creates a growing gap in who can afford to build and use AI.
We believe the path forward isn’t just more hardware, but radical efficiency through compression. Historically, every major medium—from JPEGs for images to MP3s for audio—had to be compressed to become scalable. AI input will follow the same path.
By distilling prompts down to their most significant tokens, we bypass the hardware bottleneck and make massive context windows scalable.
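The idea can be sketched in a few lines of plain Python. The sketch below uses word frequency as a crude stand-in for token significance; it illustrates the compression concept only and is not bear-1 or any Token Company model:

```python
import re
from collections import Counter

# Crude stand-in for "significance": frequent words tend to carry less
# information, so we drop the most common tokens first. Illustration only;
# not The Token Company's model.

def compress_prompt(text: str, keep_ratio: float = 0.6) -> str:
    tokens = re.findall(r"\S+", text)
    counts = Counter(t.lower() for t in tokens)
    # Rank token positions from rarest to most common, then keep the
    # rarest keep_ratio of them in their original order.
    ranked = sorted(range(len(tokens)), key=lambda i: counts[tokens[i].lower()])
    keep = set(ranked[: max(1, int(len(tokens) * keep_ratio))])
    return " ".join(tokens[i] for i in sorted(keep))

text = "the cat sat on the mat and the dog sat on the rug"
print(compress_prompt(text, keep_ratio=0.5))  # drops the repeated filler words
```

A real compressor learns significance from data rather than from raw frequency, but the interface is the same: text in, shorter text out, meaning preserved.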
The Token Company is currently in stealth and closed a $2.2M pre-seed at a $15M valuation in January 2026 from Y Combinator, SV Angels, Inception Fund, Visionaries Club, and founders behind Hugging Face, ZFellows, Supercell, and AMD Silo AI.
Try the demo at thetokencompany.com.
The Token Company’s classification model bear-1 offers up to 60% token savings while improving accuracy by +1.1% on the LongBench V2 benchmark.
We are taking submissions for two challenge categories:
Exceptional teams may also be considered for internships or recruitment opportunities.
The Token Company team will be available for questions during the event.
The company is actively hiring top ML talent interested in researching and building novel compression technologies.
If you’re interested in working with the team in San Francisco, reach out at team@thetokencompany.com.
Polymarket is the world's largest prediction market where traders predict the outcome of future events across politics, current events, pop culture, and more, winning when they're right. As traders react to breaking news in real-time, market prices become the most accurate gauge of event likelihood, which institutions, individuals, and the media rely on to report news and better understand the future. With billions of dollars in predictions made in 2025 and exclusive partnerships with the Wall Street Journal, UFC, Golden Globes, and New York Rangers, Polymarket has established itself as the definitive platform for real-time forecasting and market-driven insights.
Prediction markets are powerful financial derivatives, especially for hedging exposure and constructing sophisticated strategies, but most users still interact with them using relatively simple interfaces. In traditional finance, traders rely on sophisticated tooling like profit & loss curves, scenario modeling, time-based payoff visualizations, and portfolio hedging views to deeply understand risk and opportunity before placing a trade. These tools are largely missing in prediction markets today.
Your challenge is to design and build advanced trading tools for Polymarket that help users better understand, visualize, and manage risk across time, price, and probability.
Participants should build applications that leverage Polymarket markets and data to create TradFi-style trading experiences, such as:
The goal is to unlock more sophisticated trading behavior by making prediction markets easier to reason about, experiment with, and trust, especially for users coming from traditional trading or crypto-native derivatives.
Submissions can take various forms, including but not limited to web apps, dashboards, visual simulators, and analytical tools.
We will prioritize projects that demonstrate:
All integration examples cover data fetching, order placement, and other common operations using the Polymarket APIs.
Here are some example ideas to help kickstart the building process.
There are many unlockable ideas in this space; don’t feel limited to the examples above. We encourage creative approaches that help users better understand risk, probability, and payoff before placing a trade.
LiveKit is an open-source framework for building real-time voice agents that people can speak to naturally. It solves the hardest parts of voice agents—low-latency audio and real-time orchestration—in a unified system.
Built for developers and teams, LiveKit makes it easy to compose and control all the moving pieces required for a responsive, conversational agent. Hackers can bring their own models, tools, and logic, experiment freely, and focus on creating novel voice-first experiences without worrying about real-time infrastructure.
The winner of this track will be the team that makes the best use of LiveKit. Use our open-source framework and cloud services to build an agent that embeds into your application (or that your application is built around).
The winning project will demonstrate a functioning agent that uses more complex features of the framework to create a unique and technically interesting application.
If you want to talk to your project—or use your project with your voice in any way—LiveKit is the right tool to use.
Kairo is the world's first AI-native IDE platform for end-to-end secure smart contract development. As blockchain technology evolves from experimental protocols to critical financial infrastructure, security must transform from an afterthought into a foundational principle embedded in every line of code.
Your challenge is to build a blockchain-based application, protocol, or developer tool using Kairo.
Teams can explore any on-chain use case — infrastructure, tooling, protocols, or applications — and should use Kairo during development to design, iterate, and validate their on-chain logic.
The focus is on building something real, production-minded, and thoughtfully engineered.
1st Place: Apple Watches + Interview Internship + $1000 Kairo Tokens
2nd Place: Nintendo Game Boy + Internship Interview + $1000 Kairo Tokens
3rd Place: Internship Interview + $1000 Kairo Tokens
Arize is an observability and evaluations platform for developers. If you’re building an AI agent (and who isn’t?!), our open source Phoenix software works automatically with all popular agent development frameworks to instrument your code and let you see what your agent is doing and why. Once you’ve got observability in place, you can optimize your application by A/B testing variations of the prompts your agent sends to LLMs and automatically measure the results.
Put simply: if you’re trying to build an agent that actually works, you need to debug it with more than just vibes. Arize Phoenix is the free, open-source, developer-focused solution to that problem.
You’re almost certainly building an AI agent, and you’re probably using a popular framework like LangChain, LlamaIndex, CrewAI, or Mastra to build it. With just a few lines of code, you can automatically instrument an agent built with these (and more) frameworks.
Your challenge is to bring Arize into the mix to help understand what your agent is doing and why, and then use our prompt playground or experiments to get your agent from “sort of works” to “always works.” Arize can help your project no matter what it does.
We will prioritize projects that demonstrate:
No matter what your agent does, your workflow with Arize will be the same: capture traces of what it’s doing, inspect them, and iteratively improve your agent by modifying prompts and evaluating outcomes.
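Phoenix handles instrumentation automatically for the frameworks above, but the core concept of a trace can be sketched in plain Python. The decorator below illustrates what a captured trace contains (step name, inputs, outputs, latency); it is a conceptual sketch, not Phoenix’s API:

```python
import functools
import time

# A trace is an ordered record of what the agent did: each step's name,
# inputs, output, and how long it took. Instrumentation libraries capture
# this automatically; here we do it by hand to show the shape of the data.

TRACE = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "step": fn.__name__,
            "args": args,
            "output": result,
            "ms": (time.perf_counter() - start) * 1000,
        })
        return result
    return wrapper

@traced
def plan(task):
    # Hypothetical agent step standing in for an LLM planning call.
    return f"steps for: {task}"

@traced
def act(plan_text):
    # Hypothetical execution step consuming the plan.
    return f"done ({plan_text})"

act(plan("book a flight"))
for span in TRACE:
    print(span["step"], "->", span["output"])
```

Once every step is visible like this, "sort of works" becomes debuggable: you can see exactly which span produced a bad output and A/B test the prompt behind it.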
$1,000 for Best Use of Arize Phoenix
Seda is a social media platform for deep research and collaborative discovery.
Users conduct deep research on any topic they care about—similar to ChatGPT Deep Research—including policy and government, law, AI, sports, music, art, history, philosophy, finance, science, global events, markets, prediction markets, and emerging ideas. They then post their research and discoveries directly to a shared feed for friends and the broader community to explore.
On Seda, users can follow others, see what they’re researching, read posts, comment, like, challenge ideas, debate interpretations, expand on existing work, share opinions, and even fork posts to explore alternative research directions through additional investigation.
Over time, this creates a growing, interconnected body of real-time research and discovery around the world’s curiosities. Unlike traditional social platforms such as X that prioritize speed and engagement, Seda is designed to preserve context, reasoning, and evidence—allowing ideas to develop collaboratively and creating a stronger truth engine for the internet.
In order to build anything in this world, you have to research and discover.
For this challenge, Seda is hosting a Researchathon.
Participants will come together, form teams, and research anything they are interested in or actively working on using the Seda Deep Research Engine. Teams will then post their discoveries to the Seda social media platform.
Participants will compete for points, attention, fame, and over $2,500 in total prizes based on research activity, engagement, and collaboration during the hackathon.
The Researchathon runs from 12:00pm EST Jan 17th to 1:00pm EST Jan 18th.
Points are awarded for actions taken within Seda during the Researchathon timeframe.
Participants do not receive points for liking or commenting on their own posts.
At the end of the event, point totals are tracked automatically. For the following three awards, participants will submit values manually for judges to review.
Registration (required):
Registration Form & App Installation Guide
Invite Code: 860052
Download the Seda App:
If participants run out of free research credits during the event, they will be upgraded to Seda Pro upon contacting the sponsor via Slack or in person.
Leaderboard:
Live Researchathon Leaderboard
Suggested research areas include:
The Seda team will be present at a sponsored table throughout the hackathon and available in Slack to help participants with questions or issues.
A live, public leaderboard will be displayed during the Researchathon so teams can track point totals in real time.
It is critical that participants follow the registration instructions precisely, as team member registration is how points are tracked for leaderboard ranking and final submissions.
Wood Wide AI is an API-first numeric reasoning layer for structured, tabular, time-series, and event data. It transforms raw tables into reusable numeric intelligence, enabling developers to generate predictions, detect anomalies, and uncover meaningful segments. These insights can then be composed into decision-ready workflows.
Wood Wide is designed for applications where numeric correctness, interpretability, and speed are essential—especially in real-world environments where decisions must be made under concrete constraints.
Build a numeric decision workflow using Wood Wide AI.
Using Wood Wide APIs, participants are asked to build an application that reasons over realistic structured data and supports a clear, real-world decision that a person or system would actually make. The focus is on grounded workflows rather than one-off analyses.
Strong submissions focus on:
Your project should:
The strongest projects clearly answer the following questions:
This track emphasizes turning structured data into decisions people can trust. Judges will prioritize clarity, numeric correctness, interpretability, and real-world usefulness.
All participants receive:
Example use cases include, but are not limited to:
Teams are encouraged to chain insights together meaningfully—for example:
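One such chain, sketched in plain Python: flag anomalous days first, then segment the remaining days to support a staffing or inventory decision. Wood Wide’s actual API calls are not assumed here; the sketch only shows the shape of a chained numeric workflow:

```python
import statistics

# Illustrative chained workflow over a small daily-orders table. In practice
# the anomaly and segmentation steps would be Wood Wide API calls; here they
# are simple local stand-ins.

orders = {"mon": 120, "tue": 115, "wed": 980, "thu": 130, "fri": 125,
          "sat": 60, "sun": 55}

values = list(orders.values())
mean, stdev = statistics.mean(values), statistics.stdev(values)

# Step 1: anomaly detection via a 2-sigma z-score rule.
anomalies = {d: v for d, v in orders.items() if abs(v - mean) > 2 * stdev}

# Step 2: segment the remaining normal days into high/low demand
# around the median, grounding a concrete decision (e.g. staffing levels).
normal = {d: v for d, v in orders.items() if d not in anomalies}
median = statistics.median(normal.values())
segments = {d: ("high" if v >= median else "low") for d, v in normal.items()}

print(anomalies)   # the spike on Wednesday stands out
print(segments)
```

The point of chaining is that each step's output becomes the next step's input, so the final decision is traceable back through interpretable numeric evidence.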
The winning teams will receive:
1st Place: $750
2nd Place: $500
3rd Place: $250
Wood Wide technology perks and swag
The Wood Wide team will be actively involved throughout the hackathon, providing guidance, technical support, and feedback to participating teams.
DevSwarm is what hackathon winners use to ship in half the time.
Run multiple AI coding agents in true parallel—Claude Code, Codex, Gemini, Amazon Q, Aider, Goose, or any CLI agent you want—all at once. Each agent works on its own isolated Git branch, so you can build different features simultaneously without context switching.
One interface. Switch between agents with a keystroke. Push to GitHub without leaving your terminal. Your IDE, build tools, and workflow stay exactly where they are.
24 hours goes fast. Waiting on one agent at a time is a bottleneck you can’t afford. DevSwarm lets you move as fast as you can think.
Use DevSwarm to build your NexHacks project.
That’s it.
Download DevSwarm and build your project end to end. Run multiple AI coding assistants in parallel on isolated branches. Ship quality code in half the time and credits.
You’re competing in an AI-focused hackathon—might as well use the best tool.
With only 24 hours on the clock, this track is for teams who want to move faster than the rest.
Projects should:
We will prioritize projects that demonstrate:
Overshoot enables developers to build AI applications that can see the world and act on it in real time.
The Overshoot API allows you to connect a video stream—such as a phone camera, webcam, livestream, screen share, or YouTube video—to any Vision Language Model and run inference on it. All video-stream handling is abstracted away, making integration as simple as a single line of code.
Overshoot has been in closed beta since inception, and this hackathon marks the first time the API is being opened to the public. The team is excited to see what developers create with real-time vision intelligence.
The world is your oyster.
Vision-capable LLMs have recently unlocked the ability to understand video, opening the door to a wide range of real-time vision applications.
Participants are encouraged to use their creativity to build anything they are excited about—whether that’s a UFC live commentator, an AI that watches your pet, or an assistant that monitors your screen while you study.
Projects will be judged primarily on:
This is a new and rapidly evolving space, and fun, imaginative ideas are strongly encouraged.
SDK: Provided by Overshoot
Documentation: Provided by Overshoot
The Overshoot team will be available throughout the hackathon to help participants with whatever they are building.
Overshoot will have a table at the hackathon.
Top-quality Overshoot merchandise will be distributed to participants who sign up for the track.
The team will also present during the opening ceremony and will share slides with organizers ahead of time.
TRAE is a next-generation, AI-native Integrated Development Environment (IDE) launched by ByteDance in early 2025. It is designed to act as an "AI development engineer" rather than just a coding assistant, supporting the entire software development lifecycle from requirements analysis to deployment.
The best use of TRAE :)
1st place: $1,000
2nd place: $500
3rd place: $500
For more information, visit: https://www.mlh.com/events/nexhacks/prizes
Google Swag Kits
LeanMCP handles deployment and observability for MCP servers and ChatGPT Apps, eliminating the pain of manual setup, debugging, scaling, and protocol compliance. Teams can go from concept to production in hours instead of weeks, with built-in auto-scaling, fault tolerance, isolated tasks, and low-latency access across 30+ regions.
The platform integrates AI with authentication, prompts, and application management, alongside real-time monitoring for performance and usage. It is designed for developers and teams at organizations like NVIDIA, Meta, Google, and Salesforce who need to host MCPs and ChatGPT Apps for internal or external use.
LeanMCP offers multi-client support (Claude, Cursor, Windsurf), production-grade security and scalability, and an open-source TypeScript SDK (@leanmcp/core) with decorators that enable rapid tool definitions. With minimal boilerplate and simple CLI commands such as leanmcp init and deploy, developers can focus on building AI agents and tools without infrastructure overhead.
Your challenge is to solve a real-world problem using MCP and deploy the MCP on the LeanMCP platform. Participants are encouraged to use the LeanMCP SDK to build their solution.
One example problem involves improving how tools like Claude Code and Cursor retrieve documentation. These tools often load entire documentation files into context, causing context bloat, increased token usage, and reduced accuracy.
A potential solution is a Documentation MCP that intelligently selects and serves only the relevant context needed for a specific task, reducing token costs while improving speed and code-generation accuracy.
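The selection logic such a Documentation MCP tool might expose can be sketched in a few lines. The MCP and LeanMCP wiring is omitted, and the function name and scoring scheme below are illustrative assumptions, not part of the LeanMCP SDK:

```python
# Sketch of the retrieval core of a hypothetical Documentation MCP: score
# doc sections against the task query by keyword overlap and return only
# the top matches, instead of loading the whole documentation file.

def select_sections(sections: dict, query: str, top_k: int = 2) -> list:
    q_words = set(query.lower().split())
    scored = []
    for title, body in sections.items():
        overlap = len(q_words & set(body.lower().split()))
        scored.append((overlap, title))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

docs = {
    "auth": "configure api keys and tokens for authentication",
    "deploy": "deploy your server to production with the cli",
    "ui": "render chat components in the browser",
}
print(select_sections(docs, "how do I deploy with the cli"))
```

A production version would use embeddings rather than keyword overlap, but the contract is the same: the tool returns a small, relevant slice of context, cutting token costs and context bloat.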
Projects will be evaluated primarily on:
Documentation: https://docs.leanmcp.com/
CLI: https://www.npmjs.com/package/@leanmcp/cli
SDK: https://github.com/leanmcp/leanmcp-sdk
MCP Auth & Payments: https://www.npmjs.com/package/@leanmcp/auth
ChatGPT Apps UI: https://www.npmjs.com/package/@leanmcp/ui
API Keys: https://ship.leanmcp.com/api-keys
Discord: https://discord.com/invite/DsRcA3GwPy (contact lu_xian, dheerajpai, or kushagra525 for credits)
Example Repositories:
https://github.com/LeanMCP/leanmcp-sdk/tree/main/examples
LeanMCP mentors will be available online 24/7 throughout the hackathon via Discord.
Discord: https://discord.com/invite/DsRcA3GwPy
Mentors: lu_xian, dheerajpai, kushagra525
LeanMCP will also give a 1–2 minute presentation during the opening ceremony and provide up to three slides for the master presentation deck.
Wispr Flow is a speech-to-text product focused on fast, accurate, and seamless voice input. Wispr enables users to convert speech into text efficiently, supporting real-time workflows across writing, coding, and communication.
Wispr Flow is used by developers, builders, and teams who want to move faster by speaking instead of typing, and is actively used in hackathons and technical projects.
Participants are encouraged to build projects that incorporate speech-to-text technology using Wispr Flow.
This can include any creative or practical use of voice input, speech-to-text workflows, or applications that demonstrate how spoken language can improve speed, accessibility, or productivity in real-world software.
Wispr provides the following resources to hackathon participants and organizers:
ElevenLabs is the most realistic voice AI platform, powering millions of developers, creators, and enterprises. From low-latency conversational agents to the leading AI voice generator for voiceovers and audiobooks, ElevenLabs enables natural, expressive, and scalable voice experiences.
Build an innovative application that leverages ElevenLabs’ voice AI technology to solve a real problem or create a compelling user experience. Your project should demonstrate creative and effective use of our technology.
You can access all ElevenLabs products through either our website UI or API.
More information: https://www.mlh.com/events/nexhacks/prizes