The PAN-BS AI-ML Challenge
Join us in an exhilarating exploration at the frontier of artificial intelligence and machine learning!
Why Participate?
Here’s why you should be part of this groundbreaking contest: you'll gain hands-on experience handling the nuances of real-world datasets, solving real-life problem statements, showcasing your ML and NLP skills, and collaborating with peers.
Challenging Tracks 🌟
Fake News Detection in Dravidian Languages
In an age of information overload, accurately identifying fake news is crucial for fostering reliable communication. This task explores the effectiveness of NLP techniques for Dravidian languages, which remain low-resource in NLP research.
AI-Generated Text Detection in Articles
With the rapid advancement of AI, distinguishing between human-written and AI-generated content is increasingly challenging. This challenge aims to explore the capabilities of ML models to accurately identify the origin of textual content, contributing to the development of robust techniques for detecting AI-generated text.
Schedule

Aug 04: Release of Train Data
The training datasets for both tasks will be released along with the detailed problem statements, milestones, and information on evaluation metrics. Teams can then start exploring the datasets and building and training suitable models.
Note: No validation set will be provided. Hint: Teams can split the train set to make their own validation data.
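A minimal sketch of such a split with scikit-learn, assuming the train data arrives as a CSV with text and label columns (the file and column names here are illustrative, not the actual release format):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative file/column names -- adjust to match the actual release.
df = pd.read_csv("train.csv")

# Hold out 20% for validation, stratified so label proportions match.
train_df, val_df = train_test_split(
    df, test_size=0.2, stratify=df["label"], random_state=42
)

train_df.to_csv("train_split.csv", index=False)
val_df.to_csv("val_split.csv", index=False)
```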
Aug 07: Release of Unlabelled Test Set
The test set will be provided to the teams, who will then use their developed models to generate predictions.
Note: The ground truth labels won't be released.

Aug 08–09: Submission of Runs and Code
Each team can submit a maximum of 3 run files (i.e., submissions containing the model-predicted outcomes). These can be results from different models or from the same model with different hyperparameters. Evaluation metrics such as F1 score and accuracy will be used to assess model performance.
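Before submitting, teams can score a run on their own validation split with the same metrics; a minimal sketch using scikit-learn (the labels shown are illustrative):

```python
from sklearn.metrics import accuracy_score, f1_score

# Gold labels from your validation split and your model's predictions
# (illustrative values -- replace with your own).
y_true = ["real", "fake", "fake", "real", "fake"]
y_pred = ["real", "fake", "real", "real", "fake"]

# The leaderboard reports macro-averaged F1 and accuracy.
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
print("accuracy:", accuracy_score(y_true, y_pred))
```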

Aug 11: Declaration of Winners & Presentation Session
Scores will be displayed on the leaderboard on Aug 10. The top teams will deliver a brief system presentation and share their insights via a virtual meet on Aug 11, judged by Mr. Piyush Wairale.
Still Have Questions?
Feel free to contact us.
#1 I'm from Wayanad, but I'd like to team up with my friend who's from Gir.
Of course, it's perfectly fine for a team to include members from different houses.
#2 I registered as an individual. Now I want to team up. What should I do?
Please re-submit the registration form with the updated team member details; either of you can submit it. Our automated scripts detect such cases, and both of your earlier individual responses will be discarded during processing. You will be issued one team ID.
#3 Will the organizers create the teams?
With around 700 IITM BS students present in our LogicLooM and ML Challenge groups (combined), we encourage you to form teams through discussions and by sharing your skills with each other. Collaboration is an important aspect of these events, so we highly recommend working together if possible.
However, if you're unable to find a team-mate, register 'solo'. Once registration closes and the data is processed, we will pair solo registrants into teams according to the following priority rules:
- Batch Similarity: We will prioritize pairing students from the same or adjacent batches.
- Track Preference: We will ensure that team members are assigned to the same track they selected during registration.
All participants will receive a confirmation auto-mail from 'no-reply wayanad' regarding the team allocation details and team ID.
#4 What if I want to participate individually?
We understand that some participants may prefer to compete individually. If you possess the necessary skills (proficiency in Python and ML/NLP), please complete the Request Form. Note that this form is for requesting individual participation and is not the registration form; you should have already submitted your registration form beforehand.
If you meet the criteria outlined in the Request Form, your request will be approved, and you will be excluded from the team allocation process.
#5 Can we add a 3rd member to an already registered 2-member team?
You can't edit your previous response. Please submit the registration form again with the updated team member details; the latest response will be considered.
#6 Can we change our track after registration?
Yes, submit the registration form again with the updated track choice. Changes are possible until the registration deadline.
#7 I'm a fresher and new to ML. Should I participate?
You need not worry if you're just starting out and don't have a technical or coding background. We’re providing you with easy-to-understand resources and a detailed roadmap to help you get familiar with Python first.
Our goal isn’t for you to master these intricate aspects right away, but rather to gain a solid understanding of the basics and how ML works. This will be a fantastic hands-on opportunity & learning experience. Give it a try. Learn, join our sessions and enjoy the challenge :)
#8 Can a team participate in both the tasks?
No, one team can only opt for one track.
#9 I want to switch teams. What should I do?
A participant can only be a part of ONE team. Please submit or ask one of your new team-mates to re-submit the registration form with the updated team details.
You must inform your previous team members of your decision to leave, as one of them needs to re-submit the registration form with their new structure (excluding you). Otherwise, all participants from such teams will be desk-rejected from the challenge.
#10 We submitted the same team details twice. Is that a problem?
Don't worry. We have an automated process to detect such cases. Irrespective of the multiple responses, you both will be allotted the same team ID.
#11 Are pre-trained fine-tuned models allowed?
Yes, teams may use pre-trained models and fine-tune them for their track.
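For illustration, a minimal sketch of running an off-the-shelf pre-trained classifier with the Hugging Face transformers library (the model name is only an example; pick one suited to your track and fine-tune it on the released train data):

```python
from transformers import pipeline

# Load a publicly available pre-trained text classifier.
# The model name is illustrative -- substitute one suited to your track.
clf = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(clf("This headline looks suspicious."))
```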
#12 Are there any restrictions on libraries or models?
Yes, there are specific restrictions for each track. You'll get complete clarity on 4th August 2024 when we release the detailed problem statements with the task-specific rules and resources.
#13 Will there be any sessions to clarify doubts?
Yes, there will be a discussion session with registered teams on 6th August to address any doubts regarding the challenge. Additionally, two more mentor assistance sessions will be hosted, which should help the teams during the training and development phase.
Rules & Guidelines
Participants can join as individuals or in teams of up to 3 students. All IITM DS and ES students are welcome, regardless of their level. A student must be part of only one team.
To participate, you must register for the challenge here before the deadline. A unique Team ID will be sent to the PoCs via email.
Student mentors will be available during the contest (model training & validation phase) to assist you if needed.
FL students with basic Python proficiency are encouraged to join us! The focus should be on the learning experience. Resources and video content will be provided.
Participants should regularly check their email for updates about the contest and follow the Schedule for more information.
Certificates
- Participation certificate: Provided to all team members who submit a solution & complete the challenge.
- Honourable Mention*: Given to teams ranked 4th–10th on the leaderboard after the contest ends.
- Certificate of Appreciation*: Awarded to the top 3 teams.
- Certificate of Achievement*: Awarded to teams that surpass the benchmark scores.
*Certificates will be signed by the Head, Student Affairs.
Evaluation Policy
Contest Leaderboard (General Team Rankings)
Each team’s submissions are scored as a tuple (F, A) = (F1 score, Accuracy). Rankings prioritize the F1 score, with Accuracy as the tie-breaker. A team’s best score across all submitted runs is considered.
If multiple teams share the same (F, A), the team with fewer members is ranked higher. Identical sizes and scores result in a shared rank.
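A minimal sketch of this ordering in Python (the team data shown is illustrative; shared ranks for fully identical entries are omitted for brevity):

```python
# Rank teams by F1 (desc), then Accuracy (desc), then team size (asc).
teams = [
    {"id": "T1", "f1": 0.92, "acc": 0.95, "size": 2},
    {"id": "T2", "f1": 0.92, "acc": 0.95, "size": 1},
    {"id": "T3", "f1": 0.90, "acc": 0.97, "size": 3},
]

ranked = sorted(teams, key=lambda t: (-t["f1"], -t["acc"], t["size"]))
for pos, t in enumerate(ranked, start=1):
    print(pos, t["id"])
# T2 outranks T1: identical (F, A), but the smaller team wins the tie-break.
```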
Top Team Rankings After System Presentation
Top 10 teams (subject to change) will present their systems. Presentations will be judged on:
- Novelty: Originality and use of innovative methods.
- Technical Accuracy: Correctness of implementation and methods.
- Clarity and Organization: Logical and coherent flow of the presentation.
- Model Performance: Effectiveness, robustness, and consistency.
- Visual & Aesthetic Quality: Design and clarity of presentation slides.
- Engagement & Delivery: Presenter confidence and audience engagement.
The team with the highest combined rubric score will be declared the winner.
Code of Conduct
- Evaluations will be automated via the app-portal. Deviations from templates may affect grading.
- Code submission via the GForm is mandatory — portal scores without code will be discarded.
- Unfair practices will result in disqualification. The organizing team’s decision is final.
- By participating, you agree to all Terms and Conditions. Participant email data will only be used by the organizers. Rules may change and events may be postponed/cancelled under exceptional circumstances. No grievances will be entertained.
Contact Us
For questions, email the Organizers at wayanad-ml@ds.study.iitm.ac.in
Guests

Mr. Piyush Wairale
Judge
Ms. Kothai SK
Guest
Video Resources
- PAN-BS AI-ML Challenge Finale (Winner announcement & System Showcase)
- Comprehensive Approach to the Challenge Problem Statements
- Orientation Session - ML Challenge 2.0 (Saavan edition)
- PAN-BS AI-ML Challenge Orientation (Open Session)
- ML Challenge 2.0 Model Discussion & Presentation
Top Leaderboard
Track 2 – Qualifying Teams
Team ID | Participants | macF1 | Accuracy
---|---|---|---
- (From 2.0 re-run) | Meikanda Sivam Sivakumar | 0.979 | - |
T22104 | Sai Ruthvik, Shankha Subhra Saha | 0.963 | 0.986 |
T22082 | Parashmani Datta, Athish Sivakumaran | 0.956 | 0.983 |
T12173 | Darshan Kumar | 0.951 | 0.983 |
- (From 2.0 re-run) | Krish Gupta | 0.948 | - |
- (From 2.0 re-run) | Lakshya Patel, Soham Katlariwala | 0.944 | - |
T32019 | Saminathan C, Vanchit Visanth M S, Sahishnuram S | 0.922 | 0.969 |
T22121 | Siddharth Roy, Sarfraz Ahmed | 0.892 | 0.954 |
T22094 | Gaurav Singh, Sakshi | 0.863 | 0.950 |
T22128 | Ayaan Qureshi, Nikhil Maurya | 0.861 | 0.938 |
T12002 | Shiva Kumar | 0.833 | 0.917 |
T22074 | Rohit Satheesh, Mohit Kumar | 0.790 | 0.888 |
T22103 | Saravanan K, Stuti Bahuguna | 0.783 | 0.886 |
T22029 | Arka Dash, Nimish Shinde | 0.713 | 0.820 |
T32007 | Manaswita Mandal, Debapriyo Saha | 0.697 | 0.805 |
Track 3 (MLC 2.0) – Top Teams
Participants | Acc.
---|---
Karthik Agrawal | 0.95 |
Ripunjay Kumar, Aakashdeep Srivastava, Harsh Singh | 0.86
Serah Santiago, Priyanshu Sharma | 0.82 |
Track 1 – Qualifying Teams
Team ID | Participants | macF1 | Accuracy
---|---|---|---
T11017 | Kartik Agrawal | 0.419 | 0.765 |
- (From 2.0 re-run) | JS Karthik | 0.418 | - |
T21154 | Keshari Nath Chaudhary, Pratiksha Naik | 0.405 | 0.871 |
T21166 | Shaikh Gufran Jabbar, Sukanya S | 0.400 | 0.841 |
T11006 | Nitish Rishi | 0.364 | 0.803 |
T21141 | Aniket Dash, Deva Vasista | 0.336 | 0.886 |
T21020 | Sanyam Mittal, Nithish Kumar | 0.318 | 0.879 |