Rules & Guidelines

πŸ“… Dates
  • Hackathon Date: April 25, 2026
  • Submission Window: 10:00 AM – 10:00 PM (IST)

All submissions must be completed and uploaded on Devpost before the deadline. Late submissions will not be considered.

πŸŽ“ Eligibility
  • The hackathon is open to students worldwide enrolled in a recognized educational institution.
  • Participants may compete:
    • Individually, or
    • In teams of up to 4 members
  • Cross-institutional and international teams are encouraged.
  • Participants must register on Devpost to be eligible.

πŸ’» Project & Submission Requirements
  • All projects must be developed during the hackathon window (12 hours).
  • Use of pre-trained models, libraries, and open-source tools is allowed.
  • Participants must use the official datasets provided by the organizers.

Each submission must include:

  • A clear project description
  • A public GitHub repository with code and README
  • Reported evaluation metrics (F1 Score / Accuracy / ROC-AUC as applicable)
  • Screenshots or outputs demonstrating results

Submissions must align with one of the official NLP challenge tracks.

πŸ† Prizes πŸ’° Total Prize Pool: Over $1,250 USD

 

πŸ₯‡ Grand Champion (1st Place)

  • AWS Skill Builder Annual Subscription + Oracle Cloud $300 Credit Account (value: ~$749 USD)
  • Microsoft 365 Personal, 1-Year Subscription (value: ~$70 USD)

πŸ₯ˆ First Runner-Up (2nd Place)

  • AWS Skill Builder Annual Subscription (value: ~$449 USD)
  • Microsoft 365 Personal, 1-Year Subscription (value: ~$70 USD)

πŸ₯‰ Second Runner-Up (3rd Place)

  • Microsoft 365 Personal, 1-Year Subscription (value: ~$70 USD)

A powerful productivity suite for research, documentation, and development workflows.

CERTIFICATES

All registered participants who make a valid submission will receive a Certificate of Participation. The top three winners in each challenge track will receive a Certificate of Achievement.

βš–οΈ Judging Criteria & Winner Selection

A panel of faculty and technical experts from the Department of Artificial Intelligence & Machine Learning, GGITS will evaluate all submissions.

Note: As there is no live presentation, evaluation will be based primarily on reported results, GitHub repositories, and documented outputs.

🧠 Evaluation Parameters (100 Points)

πŸ”Ή Model Performance & Accuracy β€” 40% βœ… (Primary Factor)

  • Challenge 1 β†’ Macro F1-Score
  • Challenge 2 β†’ Accuracy
  • Challenge 3 β†’ ROC-AUC

πŸ‘‰ Higher performance = higher score
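As a rough illustration of how these three metrics can be computed (a sketch assuming scikit-learn; the labels and predictions below are made up purely for demonstration):

```python
from sklearn.metrics import f1_score, accuracy_score, roc_auc_score

# Hypothetical multi-class ground truth and predictions (illustration only)
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

# Challenge 1: Macro F1 averages the per-class F1 scores with equal weight,
# so rare classes count as much as common ones
macro_f1 = f1_score(y_true, y_pred, average="macro")

# Challenge 2: plain accuracy = fraction of correct predictions
acc = accuracy_score(y_true, y_pred)

# Challenge 3: ROC-AUC is computed from predicted probabilities,
# not hard labels (binary example here)
y_true_bin = [0, 1, 1, 0]
y_prob = [0.2, 0.8, 0.6, 0.3]  # model's estimated P(class = 1)
auc = roc_auc_score(y_true_bin, y_prob)

print(macro_f1, acc, auc)
```

Whichever metric your track uses, reporting it from held-out (not training) data is what makes the number meaningful.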

πŸ”Ή Approach & Methodology β€” 20%

  • Model selection and implementation
  • Data preprocessing and feature engineering

πŸ”Ή Innovation & Creativity β€” 15%

  • Unique ideas or improvements
  • Advanced techniques or optimizations

πŸ”Ή Real-World Impact & Utility β€” 15%

  • Practical relevance of the solution
  • Scalability and usability

πŸ”Ή Documentation & Reproducibility β€” 10%

  • Clear README and explanation
  • Reproducible results
  • Screenshots / outputs

πŸ† Winner Selection

  • Scores will be aggregated across all judges.
  • The top three highest-scoring submissions, irrespective of challenge track, will be awarded:
    • πŸ₯‡ 1st Place
    • πŸ₯ˆ 2nd Place
    • πŸ₯‰ 3rd Place
  • Winners will be officially announced within 72 hours after the hackathon concludes.

Important Rules

  • Participants must clearly mention:
    • Evaluation method (train-test split / validation).
    • Dataset usage.
  • Reported metrics must be verifiable.
  • Any form of plagiarism or false reporting will lead to disqualification.
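One simple way to make reported metrics verifiable is to fix and document the split parameters, so judges can rerun the same evaluation. A minimal sketch, assuming scikit-learn (the 80/20 ratio, seed, and toy data are illustrative, not required values):

```python
from sklearn.model_selection import train_test_split

# Illustrative data; in practice, load the official dataset here
X = list(range(100))
y = [i % 2 for i in range(100)]

# An 80/20 split with a fixed random_state is reproducible;
# stratify keeps the class balance identical in train and test.
# State these exact parameters in your README.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

print(len(X_train), len(X_test))
```

Anyone cloning your repository can then regenerate the same test set and check the reported numbers.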