Résumé
Basics
Name | Dipankar Srirag |
Label | Researcher |
Email | d.srirag@unsw.edu.au |
Phone | +61 452253636 |
Url | https://dipankarsrirag.github.io/ |
Summary | A Master of Information Technology graduate from the University of New South Wales, with a strong background in Natural Language Processing (NLP). |
Work
-
2024.05 - Present Research Assistant
University of New South Wales
Build a benchmark for sentiment and sarcasm classification for under-represented dialects of English.
- Develop better data sampling strategies to make the existing datasets more challenging for LLMs.
- Design and execute experiments across a range of LLMs, involving in-context learning and parameter-efficient fine-tuning.
-
2024.05 - Present Casual Academic
University of New South Wales
Lead tutorials for the course COMP9444: Deep Learning and Neural Networks.
- Mentor student groups through their end-to-end artificial intelligence projects.
- Achieve a 95% approval rating in feedback from 20 students.
Education
-
2022.09 - 2024.09 Sydney, Australia
Master's
University of New South Wales, Sydney
Information Technology
- Principles of Programming
- Data Science and Engineering
- Big Data Management
- Data Analytics for Graphs
- Computer Vision
Publications
-
2024.09.05 Predicting the Target Word of Game-playing Conversations using a Low-Rank Dialect Adapter for Decoder Models
arXiv
Propose a novel adapter-based architecture to make pre-trained decoder models (Mistral and Gemma) robust to other dialects of English.
-
2024.05.08 Evaluating Dialect Robustness of Language Models via Conversation Understanding
arXiv
Evaluate open-source (Llama, Mistral, and Gemma) and closed-source (GPT-3.5 and 4 Turbo) models on masked target word prediction of conversations from dialogue games.
Skills
Languages
English | C2 |
Telugu | Native |
Projects
- 2023 - Present
Comparative Analysis of Abstractive Summarisation Techniques
Evaluate the effectiveness of static and context-aware embeddings in dialogue summarisation, using custom Seq2Seq, BART-base, and FLAN-T5 models.
- Large Language Models
- Dialogue Understanding