FAQs
What is the duration of the Research Intern role?
The Research Intern position typically lasts for 12 weeks.
What qualifications are required to apply for this internship?
Candidates must be currently enrolled in, or accepted to, a PhD program in Computer Science, Software Engineering, Electrical Engineering, or a related STEM field, and must have at least one year of experience conducting research and authoring peer-reviewed publications.
Are there specific work locations for this internship?
Yes, Research Interns are expected to be physically located in their manager's Microsoft worksite location for the duration of their internship.
What type of research will interns be involved in?
Interns will conduct both fundamental and applied research in areas such as large language models, multimodal AI technologies, and intelligent service monitoring, collaborating closely with researchers and scientists.
Are reference letters required for the application?
Yes, applicants need to submit a minimum of two reference letters along with their application.
What types of experiences are preferred for applicants?
Preferred candidates have experience with machine learning, natural language processing, and/or multimodal AI technologies, along with a record of publications in leading venues such as NeurIPS, ICML, or CVPR.
Is there a possibility for benefits or additional compensation during the internship?
Yes, certain roles may be eligible for benefits and other types of compensation.
What is the base pay range for this internship?
The base pay range for this internship is USD $6,550 - $12,880 per month; a different range applies in certain locations, such as the San Francisco Bay Area and the New York City metropolitan area.
How does Microsoft view diversity and inclusion in hiring?
Microsoft is an equal opportunity employer and considers all qualified applicants without regard to various characteristics protected by law.
What recent research challenges is the GenAI team addressing?
Some exciting research challenges include adapting large language models for new domains, creating multimodal agent solutions, and leveraging models like Generative Pre-Trained Transformers for debugging and program synthesis.