Objective
Enable interns to learn and build practical AI solutions: data preprocessing, model building (classical ML & deep learning), evaluation, and deployment basics. Tasks are chosen to build transferable skills relevant to Alfido Tech projects.
Features
- Hands-on ML & Deep Learning model development
- Data preprocessing, feature engineering and evaluation
- Model interpretability and reporting
- Basic model deployment (Flask/FastAPI, or a simple container)
- Responsible AI considerations (bias, fairness, data privacy)
Tools & Libraries
- Python and Jupyter Notebook
- scikit-learn for classical ML
- PyTorch or TensorFlow for deep learning
- Flask or FastAPI, plus Docker, for deployment
- SHAP / LIME for interpretability
Beginner Level Tasks
- Install Python and set up a virtual environment; run a Jupyter notebook.
- Load a CSV dataset and perform basic cleaning (missing values, type conversions).
- Explore data with summary statistics and 4 basic plots (histogram, boxplot, scatter, bar).
- Implement a simple linear regression with scikit-learn and report MAE/RMSE.
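The beginner steps above (load a CSV, clean it, fit a linear regression, report MAE/RMSE) can be sketched as follows. Synthetic data and the column names (`sqft`, `rooms`, `price`) are stand-ins for your own CSV and features:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data stands in for your CSV; in practice: df = pd.read_csv("data.csv")
rng = np.random.default_rng(0)
df = pd.DataFrame({"sqft": rng.uniform(500, 3000, 200),
                   "rooms": rng.integers(1, 6, 200).astype(float)})
df["price"] = 100 * df["sqft"] + 5000 * df["rooms"] + rng.normal(0, 10000, 200)
df.loc[::20, "rooms"] = np.nan  # simulate missing values

# Basic cleaning: fill missing values with the column median
df["rooms"] = df["rooms"].fillna(df["rooms"].median())

X_train, X_test, y_train, y_test = train_test_split(
    df[["sqft", "rooms"]], df["price"], test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"MAE: {mae:.0f}  RMSE: {rmse:.0f}")
```

Fixing `random_state` makes the split reproducible, which matters for the reproducibility expectations in the submission section.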
Note: Of the 4 main tasks below, you are required to complete any 3.
Tasks (4)
Task 1: Supervised Classification Model
Goal
Build and evaluate a supervised classification model (e.g., churn prediction, spam detection, or image classification on a small dataset).
Requirements
- Data preprocessing, train/test split, cross-validation
- Compare at least two algorithms (e.g., logistic regression, random forest)
- Report metrics: accuracy, precision, recall, F1, ROC-AUC
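A minimal sketch of the requirements above, comparing logistic regression and a random forest with cross-validation and the listed metrics. Synthetic data from `make_classification` stands in for your chosen dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {"logreg": LogisticRegression(max_iter=1000),
          "rf": RandomForestClassifier(n_estimators=100, random_state=0)}

for name, model in models.items():
    # 5-fold cross-validation on the training set to compare algorithms
    cv = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: cv_f1={cv.mean():.3f} "
          f"acc={accuracy_score(y_test, pred):.3f} "
          f"prec={precision_score(y_test, pred):.3f} "
          f"rec={recall_score(y_test, pred):.3f} "
          f"f1={f1_score(y_test, pred):.3f} "
          f"auc={roc_auc_score(y_test, proba):.3f}")
```

Cross-validation is run on the training split only, so the held-out test metrics remain an honest estimate for your report.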
Deliverables
- Notebook with code, plots, and metrics
- Short report summarizing model selection and results
Task 2: Deep Learning Model
Goal
Implement a small deep learning model using PyTorch or TensorFlow: an image classifier (e.g., a CIFAR subset) or a text classifier (e.g., sentiment analysis).
Requirements
- Use pretrained models / transfer learning where applicable
- Data augmentation for images or tokenization for text
- Plot training curves and provide evaluation metrics
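A minimal PyTorch sketch of the training loop, loss-curve recording, and model saving the requirements call for. Random tensors stand in for a real dataset; in a full solution you would load a CIFAR subset and fine-tune a pretrained model (transfer learning) instead:

```python
import torch
from torch import nn

torch.manual_seed(0)
# Random tensors stand in for real flattened 32x32 RGB images and labels
X = torch.randn(256, 3 * 32 * 32)
y = torch.randint(0, 10, (256,))

model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

losses = []  # record per-epoch loss for the training-curve plot
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    losses.append(loss.item())

torch.save(model.state_dict(), "model.pt")  # saved model file for inference
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The `losses` list is what you would plot for the training curve; logging a validation loss alongside it reveals overfitting.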
Deliverables
- Training notebook or script with results
- Saved model file and instructions to run inference
Task 3: Model Deployment
Goal
Wrap a trained model in a simple API using Flask or FastAPI and containerize it with Docker for deployment.
Requirements
- Create endpoint(s) for model inference
- Provide Dockerfile and instructions to run locally
- Include example request & response
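A minimal Flask sketch of an inference endpoint. The model here is a stand-in trained inline on the iris dataset; a real service would load your saved model from disk (e.g., with `joblib.load`):

```python
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Stand-in model trained at startup; replace with loading your saved model
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [5.1, 3.5, 1.4, 0.2]}
    features = request.get_json()["features"]
    pred = model.predict([features])[0]
    return jsonify({"prediction": int(pred)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

An example request for the deliverables, assuming the app runs on port 8000: `curl -X POST localhost:8000/predict -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'`. The Dockerfile then only needs to install the dependencies, copy this script, and run it.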
Deliverables
- GitHub repo with API code, Dockerfile, and sample requests
- Short demo (screenshot or curl example)
Task 4: Fairness, Bias & Explainability
Goal
Analyze fairness, bias, and explainability for one of your models using techniques such as SHAP or LIME, and propose mitigations.
Requirements
- Compute feature importances and use SHAP or LIME for local explanations
- Check for bias across key sensitive groups (if dataset has such attributes)
- Propose practical mitigation steps
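Before reaching for SHAP or LIME (which add local, per-prediction explanations), a global feature-importance baseline can be computed with scikit-learn alone. A sketch on synthetic data, using permutation importance:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure
# the resulting drop in held-out score
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

For the bias check, the same held-out metrics can be recomputed per sensitive group (where the dataset has such attributes) and compared across groups.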
Deliverables
- Notebook showing interpretation plots and analysis
- Short write-up describing bias checks and mitigation recommendations
How to Submit Your Tasks
For each task:
- Create a separate document (DOC, DOCX or PDF) for each task containing code links, notebooks, screenshots, and explanation.
- Include a README with environment setup and exact commands to run notebooks/scripts.
Upload artifacts:
- Push code & notebooks to GitHub and share repository links.
- Upload large files (models, datasets) to Google Drive and share links if needed.
Submit links:
- Go to the Task Submission page.
- Paste your task links, clearly labelling each with its task number.
Tip: Reproducibility matters — include exact package versions and commands in your README.