Tracks
main tracks
Dev Tools

Ship a developer tool that saves time, reduces mistakes, or improves observability; something you’d genuinely want in your own workflow.

challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes

1st place: $2,000
2nd place: $1,000
3rd place: $500

Sponsor involvement
Healthcare

Build a solution that improves healthcare experiences by reducing admin burden, improving coordination, or helping people navigate care without requiring medical diagnosis.

challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes

1st place: $2,000
2nd place: $1,000
3rd place: $500

Sponsor involvement
Education

Create a tool that improves learning outcomes by making education more personalized, accessible, and engaging for students, educators, or self-learners.

challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes

1st place: $2,000
2nd place: $1,000
3rd place: $500

Sponsor involvement
General

Build a product that meaningfully improves a real-world workflow for students, creators, or communities in under 24 hours. Prioritize usefulness, clarity, and a tight demo over breadth.

challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes

1st place: $2,000
2nd place: $1,000
3rd place: $500

Sponsor involvement
sponsor tracks
The Token Company

The Token Company (YC W26) is the first commercial lab building proprietary machine learning models for compressing LLM input by removing the least significant tokens.

The technology enables companies to fit more context into LLMs, save on input token costs, and improve model performance without affecting output.

As compute costs continue to scale, the AI industry is hitting a wall where high-level inference becomes an expensive luxury. This creates a growing gap in who can afford to build and use AI.

We believe the path forward isn’t just more hardware, but radical efficiency through compression. Historically, every major medium—from JPEGs for images to MP3s for audio—had to be compressed to become scalable. AI input will follow the same path.

By distilling prompts down to their most significant tokens, we bypass the hardware bottleneck and make massive context windows scalable.

The Token Company is currently in stealth and closed a $2.2M pre-seed at a $15M valuation in January 2026 from Y Combinator, SV Angels, Inception Fund, Visionaries Club, and founders behind Hugging Face, ZFellows, Supercell, and AMD Silo AI.

Try the demo at thetokencompany.com.

challenge prompt

We are taking submissions for two challenge categories:

  1. Alternative Compression Model
    Build the most innovative LLM input compression model or algorithm.
  2. Innovative Application for Compression
    Build the most creative application using The Token Company’s bear-1 compression model.

The Token Company’s classification model bear-1 offers up to 60% token savings while improving accuracy by +1.1% on the LongBench V2 benchmark.
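bear-1 itself is proprietary, but the general shape of the idea, scoring tokens by significance and keeping only the highest-scoring ones under a budget, can be sketched with a toy frequency heuristic (illustrative only; not The Token Company's actual method):

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "that", "it"}

def compress_prompt(text: str, budget_ratio: float = 0.6) -> str:
    """Keep the most 'significant' tokens under a token budget.

    Toy significance heuristic: stopwords score zero, other tokens
    score by inverse frequency. A learned compressor would predict
    per-token importance instead of using this rule of thumb.
    """
    tokens = text.split()
    counts = Counter(t.lower() for t in tokens)

    def score(tok: str) -> float:
        t = tok.lower()
        return 0.0 if t in STOPWORDS else 1.0 / counts[t]

    budget = max(1, int(len(tokens) * budget_ratio))
    # Pick the top-`budget` tokens, then re-emit them in original order.
    keep = sorted(range(len(tokens)),
                  key=lambda i: score(tokens[i]), reverse=True)[:budget]
    return " ".join(tokens[i] for i in sorted(keep))

print(compress_prompt("the quick brown fox jumps over the lazy dog in the field", 0.5))
```

Even this crude version halves the token count while keeping the content words; the challenge is to do the same without hurting downstream accuracy.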

Project Requirements

Submissions should fall into one of the two tracks:

  • A novel compression model or algorithm for LLM inputs
  • An application that meaningfully leverages compressed inputs to improve cost, latency, or context depth
Evaluation Criteria
  • Innovation and novelty of approach
  • Effectiveness of compression or application design
  • Clear demonstration of benefits (cost, latency, context, or performance)
Resources
Examples
  • Compression proxies for developer tools like Claude Code
  • Customer support systems with expanded context windows
  • New prompt representations or compression-first LLM workflows
Prizes
  • Track 1 – Alternative Compression Model:
    Claude Max (5×) subscription for 6 months for each winning team member (≈ $600 per person), plus:
    • 1st Place: $1,000
    • 2nd Place: $500
    • 3rd Place: $500
  • Track 2 – Innovative Application:
    Claude Pro subscription for 6 months for each winning team member (≈ $120 per person), plus:
    • 1st Place: $1,000
    • 2nd Place: $500
    • 3rd Place: $500

Exceptional teams may also be considered for internships or recruitment opportunities.

Sponsor involvement

The Token Company team will be available for questions during the event.

The company is actively hiring top ML talent interested in researching and building novel compression technologies.

If you’re interested in working with the team in San Francisco, reach out at team@thetokencompany.com.

Polymarket

Polymarket is the world's largest prediction market where traders predict the outcome of future events across politics, current events, pop culture, and more, winning when they're right. As traders react to breaking news in real-time, market prices become the most accurate gauge of event likelihood, which institutions, individuals, and the media rely on to report news and better understand the future. With billions of dollars in predictions made in 2025 and exclusive partnerships with the Wall Street Journal, UFC, Golden Globes, and New York Rangers, Polymarket has established itself as the definitive platform for real-time forecasting and market-driven insights.

challenge prompt

Prediction markets are powerful financial derivatives, especially for hedging exposure and constructing sophisticated strategies, but most users still interact with them using relatively simple interfaces. In traditional finance, traders rely on sophisticated tooling like profit & loss curves, scenario modeling, time-based payoff visualizations, and portfolio hedging views to deeply understand risk and opportunity before placing a trade. These tools are largely missing in prediction markets today.

Your challenge is to design and build advanced trading tools for Polymarket that help users better understand, visualize, and manage risk across time, price, and probability.

Participants should build applications that leverage Polymarket markets and data to create TradFi-style trading experiences, such as:

  • Profit & loss visualizations across different probability outcomes and time horizons (e.g., https://www.optionsprofitcalculator.com/)
  • Hedging tools that pair prediction markets with other speculative positions (e.g., options, perps, spot, or synthetic exposure)
  • Scenario analysis tools that show how a position performs if an event resolves sooner vs later
  • Portfolio or strategy views that combine multiple markets into a single payoff graph
  • Educational visualizations that make complex strategies easier to understand and trade

The goal is to unlock more sophisticated trading behavior by making prediction markets easier to reason about, experiment with, and trust, especially for users coming from traditional trading or crypto-native derivatives.
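As a starting point, the payoff math for a single binary-market position is simple enough to sketch directly (illustrative only; ignores fees and slippage):

```python
def binary_payoff(shares: float, entry_price: float, resolves_yes: bool) -> float:
    """P&L of a YES position held to resolution.

    A YES share costs `entry_price` (in dollars, between 0 and 1)
    and pays $1 if the event resolves YES, $0 otherwise.
    """
    payout = 1.0 if resolves_yes else 0.0
    return shares * (payout - entry_price)

def payoff_curve(shares: float, entry_price: float, exit_prices):
    """Mark-to-market P&L if the position is sold at each exit
    price before resolution."""
    return [shares * (px - entry_price) for px in exit_prices]

# 100 YES shares bought at 35 cents:
print(binary_payoff(100, 0.35, True))    # +65.0 if YES
print(binary_payoff(100, 0.35, False))   # -35.0 if NO
print(payoff_curve(100, 0.35, [0.2, 0.35, 0.5, 0.8]))
```

Summing such payoffs across several markets (and across a correlated non-prediction-market leg) is the core of the portfolio and hedging views described above.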

Project Requirements
  • Use real or realistic Polymarket market data
    This can include live markets, historical market data, or clearly labeled simulated data derived from real Polymarket contracts. Assumptions and simplifications should be made explicit.
  • Provide a functional demo with clear user interaction
    Submissions should allow a user to input positions, strategies, or parameters (e.g., probabilities, time horizons, multiple markets) and see outputs update dynamically.
  • Produce concrete analytical or visual outputs
    Examples include (but are not limited to): payoff curves, scenario trees, time-based profit/loss charts, portfolio payoff surfaces, correlation views, inefficiency indicators, or strategy comparisons across markets.
  • Be grounded in real trading use cases
    The tool should plausibly help a trader make better decisions, manage risk, identify inefficiencies, or understand tradeoffs before placing a trade.

Submissions can come in various forms, such as but not limited to web apps, dashboards, visual simulators, or analytical tools.

Evaluation Criteria

We will prioritize projects that demonstrate:

  1. Quality of insight and correctness of modeling
    Sound reasoning around probabilities, payoffs, correlations, and resolution timing. Clear, defensible assumptions matter more than complexity for its own sake.
  2. Strength of visualization and user experience
    Interfaces that make complex strategies, risks, and tradeoffs intuitive and easy to understand. Great UX that helps users see what happens across time, price, and outcomes will be heavily rewarded.
  3. Technical depth and execution
    Thoughtful use of data, calculations, and system design. Bonus for handling multi-market interactions, non-mutually exclusive events, or correlated outcomes in a robust way.
  4. Real-world trading applicability
    The tool should plausibly help real traders make better decisions, manage risk, or identify inefficiencies.
  5. Creativity and originality
    Novel approaches to prediction market tooling, strategy construction, or educational visualization. We value new mental models and workflows, not clones of existing dashboards.
  6. Clarity of explanation
    Teams should be able to clearly explain what their tool does, why it matters, and how a trader would actually use it.
Examples

Here are some example ideas to help kickstart the building process.

  1. Leveraged Perps Position + Prediction Market Hedge Visualizer
    A visual tool that allows a user to model a leveraged crypto position (e.g., a perpetual futures trade on Hyperliquid) alongside one or more Polymarket contracts used as a hedge. The tool would show price and time windows where the combined strategy is profitable, breakeven, or loss-making, helping users understand how prediction markets can cap downside, add convexity, or shift risk across different resolution horizons.
  2. Cross-Market Strategy & Inefficiency Analyzer
    A dashboard that analyzes related or non-mutually exclusive markets (e.g., short-term vs long-term price thresholds) to visualize combined payoffs, detect relative mispricing, and highlight potentially inefficient market relationships.
  3. Correlation-Aware Trade Recommendation Engine
    A tool that evaluates correlated events (e.g., ETH vs BTC price moves, macro events across assets) and suggests more capital-efficient markets or alternative contracts based on implied probabilities and historical relationships.
    Note: This does not need to be limited to finance prediction markets. For example, there are many political prediction markets that have correlations with finance and culture markets!

There are many unlockable ideas in this space; don’t feel limited to the examples above. We encourage creative approaches that help users better understand risk, probability, and payoff before placing a trade.

Prizes
  • 1st Place Team: $1,250 cash & Apple AirPods Max for all
  • 2nd Place Team: $750 & Internship Interview
  • 3rd Place Team: $500 & Internship Interview
Sponsor involvement
LiveKit

LiveKit is an open-source framework for building real-time voice agents that people can speak to naturally. It solves the hardest parts of voice agents—low-latency audio and real-time orchestration—in a unified system.

Built for developers and teams, LiveKit makes it easy to compose and control all the moving pieces required for a responsive, conversational agent. Hackers can bring their own models, tools, and logic, experiment freely, and focus on creating novel voice-first experiences without worrying about real-time infrastructure.

challenge prompt

The winner of this track will be the team that makes the best use of LiveKit. Use our open-source framework and cloud services to build an agent that embeds into your application (or that your application is built around).

The winning project will demonstrate a functioning agent that uses more complex features of the framework to create a unique and technically interesting application.

Project Requirements
  • Use a LiveKit product in a major, core way.
  • Possible tools include:
  • Projects where LiveKit is peripheral to the main idea are unlikely to win.
Evaluation Criteria
  • Uniqueness: Novel applications or uses of LiveKit.
  • Technical Depth: Use of advanced or complex LiveKit features, or multiple LiveKit products.
  • Polish: Seamless integration, ease of use, and improved interface quality through LiveKit.
Resources
Examples

If you want to talk to your project—or use your project with your voice in any way—LiveKit is the right tool to use.

Prizes
  • Honorable Mentions: LiveKit-branded Owala water bottles.
Sponsor involvement
  • LiveKit will table at the hackathon
  • LiveKit will distribute merchandise
  • LiveKit team members will mentor teams using LiveKit
  • A workshop will be run near the beginning of the hackathon
  • Slides will be shared closer to the event date
Kairo

Kairo is the world's first AI-native IDE platform for end-to-end secure smart contract development. As blockchain technology evolves from experimental protocols to critical financial infrastructure, security must transform from an afterthought to a foundational principle embedded in every line of code.

challenge prompt

Your challenge is to build a blockchain-based application, protocol, or developer tool using Kairo.

Teams can explore any on-chain use case — infrastructure, tooling, protocols, or applications — and should use Kairo during development to design, iterate, and validate their on-chain logic.

The focus is on building something real, production-minded, and thoughtfully engineered.

Project Requirements
  • Use Kairo during development
    Projects should be built using Kairo as part of the development workflow.
  • Include at least one meaningful on-chain component
    This can be a contract, program, or protocol-level logic with real relevance to the project.
  • Demonstrate how Kairo influenced development
    Show how Kairo impacted design decisions, debugging, iteration speed, or security validation.
  • Provide a working demo or clear simulation
    Submissions should include a functional demo or a clearly explained simulation of the system.
  • Clearly explain the problem and why blockchain is used
    Teams must articulate what they’re solving and why an on-chain approach is necessary or meaningfully better.
Evaluation Criteria
Resources
Examples
Prizes

1st Place: Apple Watches + Internship Interview + $1000 Kairo Tokens

2nd Place: Nintendo Game Boy + Internship Interview + $1000 Kairo Tokens

3rd Place: Internship Interview + $1000 Kairo Tokens

Sponsor involvement
Arize

Arize is an observability and evaluations platform for developers. If you’re building an AI agent (and who isn’t?!), our open source Phoenix software works automatically with all popular agent development frameworks to instrument your code and let you see what your agent is doing and why. Once you’ve got observability in place, you can optimize your application by A/B testing variations of the prompts your agent sends to LLMs and automatically measure the results.

Put simply: if you’re trying to build an agent that actually works, you need to debug it with more than just vibes. Arize Phoenix is the free, open-source, developer-focused solution to that problem.

challenge prompt

You’re almost certainly building an AI agent, and you’re probably using a popular framework like LangChain, LlamaIndex, CrewAI, or Mastra to build it. With just a few lines of code, you can automatically instrument an agent built with these (and more) frameworks.

Your challenge is to bring Arize into the mix to help understand what your agent is doing and why, and then use our prompt playground or experiments to get your agent from “sort of works” to “always works.” Arize can help your project no matter what it does.

Project Requirements
  • Instrument your agent for Arize, either using automatic framework instrumentation or by sending traces manually.
  • Use either Phoenix Cloud or Phoenix running locally from the command line to capture traces and display them in the UI.
  • Demonstrate use of the platform to A/B test prompts and improve your agent’s performance.
Evaluation Criteria

We will prioritize projects that demonstrate:

  • Non-trivial amounts of trace data. Use Arize during development and capture traces as you go.
  • Actual improvements in agent performance as a result of using Arize to optimize.
  • Bonus: Creating a systematic dataset and optimizing via experiments (stretch goal).
Resources
  • Arize Phoenix documentation
  • Quick start guides in Python and TypeScript
  • Phoenix is free to use locally or via Phoenix Cloud
Examples

No matter what your agent does, your workflow with Arize will be the same: capture traces of what it’s doing, inspect them, and iteratively improve your agent by modifying prompts and evaluating outcomes.
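That evaluation loop is framework-agnostic. A minimal sketch with a stubbed model and an exact-match scorer (toy names throughout; this is the concept, not the Phoenix API) looks like:

```python
import statistics

def run_experiment(prompt_variants, dataset, model, score):
    """Run each prompt variant over a dataset, score the outputs,
    and report the mean score per variant (a simplified stand-in
    for a hosted experiments workflow)."""
    results = {}
    for name, template in prompt_variants.items():
        scores = [score(model(template.format(q=ex["q"])), ex["expected"])
                  for ex in dataset]
        results[name] = statistics.mean(scores)
    return results

# Stub model: pretend it only answers cleanly when the prompt is terse.
def stub_model(prompt: str) -> str:
    return "4" if prompt.startswith("Answer") else "I think it is 4"

def exact_match(output: str, expected: str) -> float:
    return 1.0 if output.strip() == expected else 0.0

variants = {
    "terse": "Answer: {q}",
    "verbose": "Please think step by step about: {q}",
}
data = [{"q": "what is 2+2?", "expected": "4"}]
print(run_experiment(variants, data, stub_model, exact_match))
```

With real traces captured during development, the dataset comes from your agent's actual inputs rather than hand-written examples.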

Prizes

$1,000 for Best Use of Arize Phoenix

Sponsor involvement
  • Arize team members will be present at a sponsor table.
  • The team will be available to mentor during the hackathon.
  • Introductory slides will be shared closer to the event date.
Seda

Seda is a social media platform for deep research and collaborative discovery.

Users conduct deep research on any topic they care about—similar to ChatGPT Deep Research—including policy and government, law, AI, sports, music, art, history, philosophy, finance, science, global events, markets, prediction markets, and emerging ideas. They then post their research and discoveries directly to a shared feed for friends and the broader community to explore.

On Seda, users can follow others, see what they’re researching, read posts, comment, like, challenge ideas, debate interpretations, expand on existing work, share opinions, and even fork posts to explore alternative research directions through additional investigation.

Over time, this creates a growing, interconnected body of real-time research and discovery around the world’s curiosities. Unlike traditional social platforms such as X that prioritize speed and engagement, Seda is designed to preserve context, reasoning, and evidence—allowing ideas to develop collaboratively and creating a stronger truth engine for the internet.

challenge prompt

In order to build anything in this world, you have to research and discover.

For this challenge, Seda is hosting a Researchathon.

Participants will come together, form teams, and research anything they are interested in or actively working on using the Seda Deep Research Engine. Teams will then post their discoveries to the Seda social media platform.

Participants will compete for points, attention, fame, and over $2,500 in total prizes based on research activity, engagement, and collaboration during the hackathon.

The Researchathon runs from 12:00pm EST Jan 17th to 1:00pm EST Jan 18th.

Project Requirements
Evaluation Criteria

Points are awarded for actions taken within Seda during the Researchathon timeframe.

  • +2 per research completed
  • +5 per post to the feed (must include nexhacks, case-sensitive)
  • +1 per like received on eligible posts (30-point cap per post)
  • +3 per like given on eligible posts
  • +2 per comment received on your eligible posts (30-point cap per post)
  • +3 per comment given on eligible posts
  • +2 per follower gained during the Researchathon
  • +10 per friend invited who registers (tracked automatically via invite codes)

Participants do not receive points for liking or commenting on their own posts.
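For teams keeping their own running tally, the scoring rules above can be sketched as a small function (field names are hypothetical; the official leaderboard is authoritative):

```python
def researchathon_points(activity: dict) -> int:
    """Compute a score from the Researchathon rules.

    Likes and comments *received* are capped at 30 points per post,
    so those counts are passed per-post rather than as totals.
    """
    pts = 0
    pts += 2 * activity.get("researches", 0)
    pts += 5 * activity.get("posts", 0)             # must include "nexhacks"
    pts += 3 * activity.get("likes_given", 0)
    pts += 3 * activity.get("comments_given", 0)
    pts += 2 * activity.get("followers_gained", 0)
    pts += 10 * activity.get("friends_registered", 0)
    # Received likes (+1 each) and comments (+2 each), 30-pt cap per post.
    for likes in activity.get("likes_received_per_post", []):
        pts += min(30, likes)
    for comments in activity.get("comments_received_per_post", []):
        pts += min(30, 2 * comments)
    return pts

print(researchathon_points({
    "researches": 3, "posts": 2, "likes_given": 4,
    "likes_received_per_post": [40, 10],   # first post hits the 30-point cap
}))
```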

At the end of the event, point totals are tracked automatically. For the following three awards, participants will submit values manually for judges to review

Resources

Registration (required):
Registration Form & App Installation Guide

Invite Code: 860052

Download the Seda App:

If participants run out of free research credits during the event, they will be upgraded to Seda Pro upon contacting the sponsor via Slack or in person.

Leaderboard:
Live Researchathon Leaderboard

Examples

Suggested research areas include:

  • Current Events
  • AI
  • Prediction Markets
  • Religion
  • Philosophy
  • History
  • Politics
Prizes
  • $1,000 – Most Points Team
  • $750 – 2nd Most Points Team
  • $250 – Best Debate / Challenge Thread (most constructive back-and-forth)
  • $250 – Most Novel Post (judges & mentors pick; novelty evaluated based on user findings, caption insights, research query, and AI research output)
  • $250 – Deepest Research Chain (single branch only; reviewed for anti-spam)
Sponsor involvement

The Seda team will be present at a sponsored table throughout the hackathon and available in Slack to help participants with questions or issues.

A live, public leaderboard will be displayed during the Researchathon so teams can track point totals in real time.

It is critical that participants follow the registration instructions precisely, as team member registration is how points are tracked for leaderboard ranking and final submissions.

Wood Wide AI

Wood Wide AI is an API-first numeric reasoning layer for structured, tabular, time-series, and event data. It transforms raw tables into reusable numeric intelligence, enabling developers to generate predictions, detect anomalies, and uncover meaningful segments. These insights can then be composed into decision-ready workflows.

Wood Wide is designed for applications where numeric correctness, interpretability, and speed are essential—especially in real-world environments where decisions must be made under concrete constraints.

challenge prompt

Build a numeric decision workflow using Wood Wide AI.

Using Wood Wide APIs, participants are asked to build an application that reasons over realistic structured data and supports a clear, real-world decision that a person or system would actually make. The focus is on grounded workflows rather than one-off analyses.

Strong submissions focus on:

  • A clearly defined user
  • A concrete decision moment
  • How numeric insights help that user act with confidence
Project Requirements

Your project should:

  • Operate on realistic structured data
  • Support a real decision, not just a metric or visualization
  • Combine multiple numeric insights into a coherent workflow
  • Produce outputs that are interpretable and actionable
Evaluation Criteria

The strongest projects clearly answer the following questions:

  • What happened?
  • Why does it matter?
  • What should happen next?

This track emphasizes turning structured data into decisions people can trust. Judges will prioritize clarity, numeric correctness, interpretability, and real-world usefulness.

Resources

All participants receive:

  • Free Wood Wide API credits
  • Access to documentation
  • Starter scripts
  • Direct access to the Wood Wide team for guidance throughout the hackathon
Examples

Example use cases include, but are not limited to:

  • Customer churn prediction and prioritization
  • Patient risk triage
  • Demand forecasting
  • Fraud or anomaly detection
  • Operational decision support systems

Teams are encouraged to chain insights together meaningfully—for example:

  • Segmenting entities and then predicting outcomes
  • Flagging anomalies and assessing downstream impact
  • Identifying what is unusual within a specific subgroup
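A toy version of the "segment, then flag what is unusual within the subgroup" chain, using plain z-scores in place of the Wood Wide APIs (illustrative only; data and field names are made up):

```python
import statistics

def segment(rows, key):
    """Group rows into segments by a column value."""
    groups = {}
    for row in rows:
        groups.setdefault(row[key], []).append(row)
    return groups

def anomalies_within(rows, field, z_threshold=2.0):
    """Flag rows whose `field` is unusual *within* this group,
    using a simple z-score as a stand-in for an anomaly API."""
    values = [r[field] for r in rows]
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    if sigma == 0:
        return []
    return [r for r in rows if abs((r[field] - mu) / sigma) > z_threshold]

orders = [
    {"region": "east", "amount": 100}, {"region": "east", "amount": 110},
    {"region": "east", "amount": 105}, {"region": "east", "amount": 900},
    {"region": "west", "amount": 950},
]
# Chain the two insights: segment by region, then look for unusual
# amounts in each segment (small samples keep z-scores modest, so
# a low threshold is used here).
for region, rows in segment(orders, "region").items():
    print(region, anomalies_within(rows, "amount", 1.5))
```

The $900 east-region order is only unusual relative to its own segment; that relativity is the point of chaining the two steps.
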
Prizes

The winning teams will receive:

1st Place: $750

2nd Place: $500

3rd Place: $250

Wood Wide technology perks and swag

Sponsor involvement

The Wood Wide team will be actively involved throughout the hackathon, providing guidance, technical support, and feedback to participating teams.

DevSwarm

DevSwarm is what hackathon winners use to ship in half the time.

Run multiple AI coding agents in true parallel—Claude Code, Codex, Gemini, Amazon Q, Aider, Goose, or any CLI agent you want—all at once. Each agent works on its own isolated Git branch, so you can build different features simultaneously without context switching.

One interface. Switch between agents with a keystroke. Push to GitHub without leaving your terminal. Your IDE, build tools, and workflow stay exactly where they are.

24 hours goes fast. Waiting on one agent at a time is a bottleneck you can’t afford. DevSwarm lets you move as fast as you can think.

challenge prompt

Use DevSwarm to build your NexHacks project.

That’s it.

Download DevSwarm and build your project end to end. Run multiple AI coding assistants in parallel on isolated branches. Ship quality code in half the time and credits.

You’re competing in an AI-focused hackathon—might as well use the best tool.

In just 24 hours, waiting on one agent at a time is a bottleneck you can’t afford. This track is for teams who want to move faster than the rest.

Project Requirements

Projects should:

  • Produce a working demo
  • Show parallel workflow in action (screen recording, video, or README walkthrough)
  • Share your demo on LinkedIn and tag DevSwarm
Evaluation Criteria

We will prioritize projects that demonstrate:

  • Technical ambition relative to time
  • Clear explanations
  • Functionality of the project
Examples
  • That side project you’ve always wanted to build
  • Your billion dollar startup
  • Something you’d usually need a full team of devs for
Prizes
  • 1st Place: $1,000 + interview at DevSwarm to join the core product team
  • 2nd Place: $500 + 1-year subscription to an AI coding tool of your choice
  • 3rd Place: $500 + LOFREE Flow 2 84 keyboard or 1-year subscription to an AI coding tool of your choice
  • All participants: Free Student Plan + invite to the DevSwarm GTM Internship Program
Sponsor involvement
Overshoot

Overshoot enables developers to build AI applications that can see the world and act on it in real time.

The Overshoot API allows you to connect a video stream—such as a phone camera, webcam, livestream, screen share, or YouTube video—to any Vision Language Model and run inference on it. All video-stream handling is abstracted away, making integration as simple as a single line of code.

Overshoot has been in closed beta since inception, and this hackathon marks the first time the API is being opened to the public. The team is excited to see what developers create with real-time vision intelligence.

challenge prompt

The world is your oyster.

Vision-capable LLMs have recently unlocked the ability to understand video, opening the door to a wide range of real-time vision applications.

Participants are encouraged to use their creativity to build anything they are excited about—whether that’s a UFC live commentator, an AI that watches your pet, or an assistant that monitors your screen while you study.

Project Requirements
  • The project must use the Overshoot API.
  • The project should roughly work. The team understands the timeline is tight and values genuine attempts.
Evaluation Criteria

Projects will be judged primarily on:

  • Creativity

This is a new and rapidly evolving space, and fun, imaginative ideas are strongly encouraged.

Resources

SDK: Provided by Overshoot

Documentation: Provided by Overshoot

The Overshoot team will be available throughout the hackathon to help participants with whatever they are building.

Examples
  • Live sports commentator
  • Fortnite commentator or roast bot
  • Pet watcher
  • Fall detection in industrial environments
  • Leak detection
  • Open-ended detection in home cameras
  • AI assistant with browser or screen monitoring
  • Baby monitor
  • Camera monitoring for home-bound individuals
  • Kitchen copilot guide
  • Gym form or sports coach
  • Real-time sports livestream highlights and analytics
  • Accessibility tools for blind or low-vision users
  • Live rap generation based on video input (e.g., collaboration with Suno)
Prizes
  • Meta / Ray-Ban smart glasses
  • Times Square billboard feature
  • Hinge+ subscription and Clash Royale pass for one year
Sponsor involvement

Overshoot will have a table at the hackathon.

Top-quality Overshoot merchandise will be distributed to participants who sign up for the track.

The team will also present during the opening ceremony and will share slides with organizers ahead of time.

TRAE - Bytedance

TRAE is a next-generation, AI-native Integrated Development Environment (IDE) launched by ByteDance in early 2025. It is designed to act as an "AI development engineer" rather than just a coding assistant, supporting the entire software development lifecycle from requirements analysis to deployment.

challenge prompt

The best use of TRAE :)

Project Requirements
Evaluation Criteria
Resources

This was provided in an email by Trae.

Examples
Prizes

1st place: $1,000

2nd place: $500

3rd place: $500

Sponsor involvement
Add-on tracks
Best UI/UX ($1000)
challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Best Technical Difficulty ($1000)
challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Most Impactful ($1000)
challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Best Solo Hacker ($1000)
challenge prompt
Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Best Use of Gemini API (MLH)

challenge prompt

For more information, visit: https://www.mlh.com/events/nexhacks/prizes

Project Requirements
Evaluation Criteria
Resources
Examples
Prizes

Google Swag Kits

Sponsor involvement
Best Use of Solana (MLH)
challenge prompt

For more information, visit: https://www.mlh.com/events/nexhacks/prizes

Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Best Use of DigitalOcean (MLH)
challenge prompt

For more information, visit: https://www.mlh.com/events/nexhacks/prizes

Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
Best Use of MongoDB Atlas (MLH)
challenge prompt

For more information, visit: https://www.mlh.com/events/nexhacks/prizes

Project Requirements
Evaluation Criteria
Resources
Examples
Prizes
Sponsor involvement
LeanMCP

LeanMCP handles deployment and observability for MCP servers and ChatGPT Apps, eliminating the pain of manual setup, debugging, scaling, and protocol compliance. Teams can go from concept to production in hours instead of weeks, with built-in auto-scaling, fault tolerance, isolated tasks, and low-latency access across 30+ regions.

The platform integrates AI with authentication, prompts, and application management, alongside real-time monitoring for performance and usage. It is designed for developers and teams at organizations like NVIDIA, Meta, Google, and Salesforce who need to host MCPs and ChatGPT Apps for internal or external use.

LeanMCP offers multi-client support (Claude, Cursor, Windsurf), production-grade security and scalability, and an open-source TypeScript SDK (@leanmcp/core) with decorators that enable rapid tool definitions. With minimal boilerplate and simple CLI commands such as leanmcp init and deploy, developers can focus on building AI agents and tools without infrastructure overhead.

challenge prompt

Your challenge is to solve a real-world problem using MCP and deploy the MCP on the LeanMCP platform. Participants are encouraged to use the LeanMCP SDK to build their solution.

One example problem involves improving how tools like Claude Code and Cursor retrieve documentation. These tools often load entire documentation files into context, causing context bloat, increased token usage, and reduced accuracy.

A potential solution is a Documentation MCP that intelligently selects and serves only the relevant context needed for a specific task, reducing token costs while improving speed and code-generation accuracy.
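The core of such a Documentation MCP can be sketched minimally. This is a hypothetical illustration (keyword-overlap scoring; a real implementation would likely use embeddings), not LeanMCP code: score documentation chunks against the task and serve only the top matches.

```python
# Minimal sketch of task-specific context selection: rank doc chunks
# by keyword overlap with the task and return only the best matches.
def select_context(task: str, chunks: list[str], top_k: int = 2) -> list[str]:
    task_words = set(task.lower().split())

    def score(chunk: str) -> int:
        return len(task_words & set(chunk.lower().split()))

    ranked = sorted(chunks, key=score, reverse=True)
    return [c for c in ranked[:top_k] if score(c) > 0]

chunks = [
    "authentication: pass the api key in the Authorization header",
    "pagination: use the cursor parameter to fetch the next page",
    "rate limits: clients may send up to 100 requests per minute",
]
print(select_context("how do I set the api key header", chunks, top_k=1))
# → ['authentication: pass the api key in the Authorization header']
```

Serving one relevant chunk instead of the whole file is exactly the token-cost and accuracy win the prompt describes.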

Project Requirements
  • Required: The MCP must be deployed on the LeanMCP platform.
  • Optional: Use the LeanMCP SDK for building the MCP.
  • Optional but recommended: Use the LeanMCP AI Gateway for LLM access (each participant will receive $25 in token credits; request via Discord).
Evaluation Criteria

Projects will be evaluated primarily on:

  • Real-world applicability
  • Novelty of the idea
  • Technical depth and execution
Examples

Example Repositories:

https://github.com/LeanMCP/leanmcp-sdk/tree/main/examples

  • Documentation Agent (recommended): Converts documents into MCPs that provide task-specific context. Submissions achieving 90% of LeanMCP’s internal benchmarks qualify for interviews.
  • Trading Agent: An automated trading agent for stock or prediction markets (e.g., Kalshi, Polymarket) that scans for arbitrage.
  • UGC Content Generator: Uses ElevenLabs and Gemini Veo via the LeanMCP AI Gateway to generate personalized audio and video content.
Prizes
  • 1st Prize: $500 in LeanMCP credits (usable across OpenAI, Anthropic, ElevenLabs) + Internship and New Grad interviews
  • 2nd Prize: $250 in LeanMCP credits + Internship and New Grad interviews
  • 3rd Prize: $250 in LeanMCP credits + Internship and New Grad interviews
Sponsor involvement

LeanMCP mentors will be available online 24/7 throughout the hackathon via Discord.

Discord: https://discord.com/invite/DsRcA3GwPy

Mentors: lu_xian, dheerajpai, kushagra525

LeanMCP will also give a 1–2 minute presentation during the opening ceremony and provide up to three slides for the master presentation deck.

LeanMCP
[CLOSE]
Wispr Flow

Wispr Flow is a speech-to-text product focused on fast, accurate, and seamless voice input, supporting real-time workflows across writing, coding, and communication.

Wispr Flow is used by developers, builders, and teams who want to move faster by speaking instead of typing, and is actively used in hackathons and technical projects.

challenge prompt

Participants are encouraged to build projects that incorporate speech-to-text technology using Wispr Flow.

This can include any creative or practical use of voice input, speech-to-text workflows, or applications that demonstrate how spoken language can improve speed, accessibility, or productivity in real-world software.

Project Requirements
Evaluation Criteria
Resources

Wispr provides the following resources to hackathon participants and organizers:

  • Three months free of Wispr Flow for all hackathon participants, available via the dedicated hackathon page: https://wisprflow.ai/hackathon/nexhacks
  • Direct access to the Wispr team via Slack for questions and support
  • Potential virtual keynote or talk by the CEO or CTO of Wispr
Examples
Prizes
  • Hackathon winners: 1 year free of Wispr Flow Pro, a unique Wispr key, and Wispr swag
  • All participants receive three months of free Wispr Flow
Sponsor involvement
Wispr Flow
[CLOSE]
ElevenLabs (MLH)

ElevenLabs is the most realistic voice AI platform, powering millions of developers, creators, and enterprises. From low-latency conversational agents to the leading AI voice generator for voiceovers and audiobooks, ElevenLabs enables natural, expressive, and scalable voice experiences.

challenge prompt

Build an innovative application that leverages ElevenLabs’ voice AI technology to solve a real problem or create a compelling user experience. Your project should demonstrate creative and effective use of our technology.

You can access all ElevenLabs products through either our website UI or API.

More information: https://www.mlh.com/events/nexhacks/prizes

Project Requirements
  • Integrate at least one ElevenLabs API.
  • Demonstrate a working end-to-end prototype with clear product integration.
  • Solve a real problem or create meaningful value for users.
Evaluation Criteria
  • Creativity & Innovation: How creatively does the project use ElevenLabs’ technology?
  • Technical Implementation: Quality of integration and use of ElevenLabs features.
  • Impact & Use Case: Does the project solve a real problem or create meaningful value?
  • Demo Quality: How clearly and effectively is the project demonstrated?
Resources
Examples
  • AI-powered accessibility tools using natural voice interfaces.
  • Multilingual podcast or video dubbing platforms.
  • Interactive voice agents for customer support, education, or healthcare.
  • Voice-enabled games or interactive storytelling experiences.
Prizes
  • 6 months of the Scale tier ($330/month value)
  • Wireless Earbuds (each team member)
  • Winning projects will be featured on the ElevenLabs Showcase
Sponsor involvement
  • No ElevenLabs team members will attend in person.
  • All technical support will be provided through Discord.
  • Judging will be conducted by NexHacks judges on behalf of ElevenLabs.
ElevenLabs (MLH)