About the challenge
Hosted by the Department of Artificial Intelligence & Machine Learning at GGITS, NeuroLogic ’26 is a high-intensity global virtual hackathon designed for the brightest university minds worldwide.
As Artificial Intelligence continues to transform how we understand and interact with language, this hackathon challenges students, researchers, and developers to build the next generation of Natural Language Processing (NLP) solutions.
Push the boundaries of computational linguistics, innovate with cutting-edge AI techniques, and compete for prizes worth over $1,250 USD, including industry-standard tools and credits from leading platforms such as AWS and Oracle.
Get started
Event Format: 12-Hour Global Virtual Sprint
Date & Time: April 25, 2026 | 10:00 AM – 10:00 PM (IST)
Eligibility: Open to students worldwide — participate individually or in teams of up to 4 members.
What’s Next?
Register for the hackathon today, start brainstorming your approach, and gear up for the ultimate NLP showdown.
Explore the Submission Requirements section to choose from a range of exciting NLP tracks and align your solution with real-world challenges in AI and computational linguistics.
Requirements
What to Build
To successfully complete the hackathon, participants must develop and submit a project addressing one of the following NLP challenge tracks:
🚨 Challenge 1: Real-Time Disaster Tweet Classification (Beginner–Intermediate)
The Task:
Build a binary classification model that analyzes social media text and predicts whether a post refers to a real disaster event (1) or is unrelated, metaphorical, or non-disaster content (0), using the provided dataset.
Why It Matters:
Rapid classification of social media content during emergencies enables first responders to prioritize critical situations and allocate resources effectively.
Evaluation Metric:
- Macro F1-Score
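Since Challenge 1 is scored by macro F1, it is worth checking your score locally before submitting. Here is a minimal pure-Python sketch for binary labels (the 0/1 labels follow the task description; this computes the same value as scikit-learn's `f1_score(..., average="macro")`):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 for binary labels: compute F1 per class, then average."""
    scores = []
    for cls in (0, 1):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Example: 4 tweets, one real-disaster tweet missed by the model
print(round(macro_f1([1, 1, 0, 0], [1, 0, 0, 0]), 4))  # → 0.7333
```

Macro averaging weights both classes equally, so a model that ignores the rarer disaster class is penalized even if its plain accuracy looks high.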
📰 Challenge 2: Fake News Detection
The Task:
Develop a robust NLP classification model that processes the title and content of news articles and predicts whether the information is reliable (Real) or misleading (Fake), based on the provided dataset.
Why It Matters:
Automated detection of misinformation is essential to combat the spread of false narratives and maintain trust in digital information ecosystems.
Evaluation Metric:
- Overall Accuracy (%)
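Overall accuracy for Challenge 2 is simply the fraction of articles classified correctly, reported as a percentage. A quick self-check helper (the Real/Fake label strings here are illustrative; use whatever labels the provided dataset defines):

```python
def accuracy_pct(y_true, y_pred):
    """Overall accuracy as a percentage: correct predictions / total * 100."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return 100.0 * correct / len(y_true)

# Example: 4 articles, 3 predicted correctly
print(accuracy_pct(["Real", "Fake", "Real", "Fake"],
                   ["Real", "Fake", "Fake", "Fake"]))  # → 75.0
```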
🛡️ Challenge 3: Multilingual Toxic Comment Classification
The Task:
Build a multi-label classification model capable of identifying multiple forms of toxicity (such as threats, obscenity, insults, and identity-based hate) across multilingual text data, using the provided dataset.
Why It Matters:
Creating safer digital environments requires scalable moderation systems that can understand context, cultural nuances, and language diversity across global platforms.
Evaluation Metric:
- Mean ROC-AUC Score
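Mean ROC-AUC for a multi-label task is typically computed as the AUC of each toxicity label's score column, averaged across labels. A pure-Python sketch using the rank-statistic view of AUC (the probability that a randomly chosen positive example outranks a randomly chosen negative one, with ties counting half); the example label names are illustrative:

```python
def roc_auc(y_true, y_score):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive example receives the higher score (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_roc_auc(y_true_cols, y_score_cols):
    """Average the per-label AUC across toxicity labels (one column per label)."""
    aucs = [roc_auc(t, s) for t, s in zip(y_true_cols, y_score_cols)]
    return sum(aucs) / len(aucs)

# Example: two labels (e.g. "insult", "threat"), four comments each
truth = [[1, 0, 1, 0], [0, 0, 1, 1]]
scores = [[0.9, 0.2, 0.8, 0.4], [0.1, 0.3, 0.7, 0.6]]
print(mean_roc_auc(truth, scores))  # → 1.0
```

Because AUC is rank-based, it rewards well-ordered probability scores rather than hard 0/1 predictions, so submit calibrated scores per label rather than thresholded outputs.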
Prizes
Grand Champion
AWS Skill Builder Annual Subscription + Oracle Cloud $300 Credit Account. (Value: $749). The ultimate cloud infrastructure and training package to build, deploy, and scale advanced AI and NLP solutions.
Microsoft 365 Personal 1-Year Subscription. (Value: $70). A powerful productivity suite for research, documentation, and development workflows.
First Runner-Up
AWS Skill Builder Annual Subscription. (Value: $449). Gain access to industry-recognized training and certifications for machine learning engineers.
Microsoft 365 Personal 1-Year Subscription. (Value: $70). A powerful productivity suite for research, documentation, and development workflows.
Second Runner-Up
Microsoft 365 Personal 1-Year Subscription. (Value: $70). A powerful productivity suite for research, documentation, and development workflows.
Judges
Ravi Kumar Tummalapenta
Executive Director (JP Morgan Chase)
Advitya Gemawat
Machine Learning Engineer, Microsoft, Redmond, WA, USA
Dr. Sameer Yadav
Associate Professor, Gyan Ganga Institute of Technology and Sciences, Jabalpur
Dr. Sumit Nema
Associate Professor, Gyan Ganga Institute of Technology and Sciences, Jabalpur
Dr. Siddharth Bhalerao
Associate Professor, Gyan Ganga Institute of Technology and Sciences, Jabalpur
Judging Criteria
- Model Performance & Accuracy
Evaluation will be based on quantitative performance metrics such as F1-Score, Accuracy, or ROC-AUC, depending on the selected challenge. Higher performance scores will receive higher marks. Participants must clearly report their metrics in the GitHub README.
- Approach & Methodology
Evaluation of model selection, implementation, data preprocessing, and feature engineering techniques.
- Innovation & Creativity
Use of unique ideas, improvements, advanced NLP techniques, or smart optimizations in the pipeline.
- Real-World Impact & Utility
The practical relevance, scalability, and usability of the solution for real-world scenarios.
- Documentation & Reproducibility
Clear explanation in the mandatory README, reproducible code and results, and inclusion of screenshots or visual outputs.
Questions? Email the hackathon manager
