- Hackathon Date: April 25, 2026
- Submission Window: 10:00 AM – 10:00 PM (IST)
All submissions must be completed and uploaded on Devpost before the deadline. Late submissions will not be considered.
Eligibility
- The hackathon is open to students worldwide enrolled in a recognized educational institution.
- Participants may compete:
- Individually, or
- In teams of up to 4 members
- Cross-institutional and international teams are encouraged.
- Participants must register on Devpost to be eligible.
- All projects must be developed during the hackathon window (12 hours).
- Use of pre-trained models, libraries, and open-source tools is allowed.
- Participants must use the official datasets provided by the organizers.
Each submission must include:
- A clear project description
- A public GitHub repository with code and README
- Reported evaluation metrics (F1 Score / Accuracy / ROC-AUC as applicable)
- Screenshots or outputs demonstrating results
Submissions must align with one of the official NLP challenge tracks.
Prizes
Total Prize Pool: Over $1,250 USD

Grand Champion (1st Place)
- AWS Skill Builder Annual Subscription + Oracle Cloud $300 Credit Account (Value: ~$749 USD)
- Microsoft 365 Personal (1-Year Subscription) (Value: ~$70 USD)

First Runner-Up (2nd Place)
- AWS Skill Builder Annual Subscription (Value: ~$449 USD)
- Microsoft 365 Personal (1-Year Subscription) (Value: ~$70 USD)

Second Runner-Up (3rd Place)
- Microsoft 365 Personal (1-Year Subscription) (Value: ~$70 USD)
A powerful productivity suite for research, documentation, and development workflows.
CERTIFICATES
All registered participants who make a valid submission will receive a Certificate of Participation. The top 3 winners in each challenge track will receive a Certificate of Achievement.
Judging Criteria & Winner Selection
A panel of faculty and technical experts from the Department of Artificial Intelligence & Machine Learning, GGITS will evaluate all submissions.
Note: As there is no live presentation, evaluation will be based primarily on reported results, GitHub repositories, and documented outputs.
Evaluation Parameters (100 Points)
Model Performance & Accuracy – 40% (Primary Factor)
- Challenge 1 – Macro F1-Score
- Challenge 2 – Accuracy
- Challenge 3 – ROC-AUC
Higher performance earns a higher score.
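As a reference for participants, the three track metrics can be computed locally before reporting them. The sketch below uses scikit-learn with placeholder labels and predictions (all values are illustrative assumptions, not organizer-provided data); the organizers' exact evaluation script may differ.

```python
from sklearn.metrics import f1_score, accuracy_score, roc_auc_score

# Challenge 1: multi-class labels -> Macro F1 (unweighted mean of per-class F1)
y_true_c1 = [0, 1, 2, 2, 1]
y_pred_c1 = [0, 1, 2, 1, 1]
macro_f1 = f1_score(y_true_c1, y_pred_c1, average="macro")

# Challenge 2: plain classification accuracy
y_true_c2 = [0, 1, 1, 0]
y_pred_c2 = [0, 1, 0, 0]
acc = accuracy_score(y_true_c2, y_pred_c2)

# Challenge 3: ROC-AUC requires predicted probabilities, not hard labels
y_true_c3 = [0, 0, 1, 1]
y_prob_c3 = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc_score(y_true_c3, y_prob_c3)

print(f"Macro F1: {macro_f1:.3f}  Accuracy: {acc:.3f}  ROC-AUC: {auc:.3f}")
```

Note that macro F1 treats all classes equally regardless of class imbalance, which is why it differs from plain accuracy on skewed datasets.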
Approach & Methodology – 20%
- Model selection and implementation
- Data preprocessing and feature engineering
Innovation & Creativity – 15%
- Unique ideas or improvements
- Advanced techniques or optimizations
Real-World Impact & Utility – 15%
- Practical relevance of the solution
- Scalability and usability
Documentation & Reproducibility – 10%
- Clear README and explanation
- Reproducible results
- Screenshots / outputs
Winner Selection
- Scores will be aggregated across all judges.
- The top three highest-scoring submissions, irrespective of challenge track, will be awarded:
- 1st Place
- 2nd Place
- 3rd Place
- Winners will be officially announced within 72 hours after the hackathon concludes.
Important Rules
- Participants must clearly mention:
- Evaluation method (train-test split / validation).
- Dataset usage.
- Reported metrics must be verifiable.
- Any form of plagiarism or false reporting will lead to disqualification.
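To make reported metrics verifiable, the evaluation method should be deterministic. A minimal sketch of a reproducible train-test split, assuming scikit-learn (the dummy data and 80/20 ratio are illustrative; a fixed `random_state` lets judges regenerate the same split):

```python
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the official dataset
X = list(range(10))
y = [0, 1] * 5

# stratify=y keeps class proportions equal in both splits;
# random_state=42 makes the split reproducible across runs
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(len(X_train), len(X_test))  # 8 2
```

Documenting the split ratio, random seed, and any validation scheme in the README lets judges re-run the pipeline and confirm the reported numbers.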
