Amazon internships across all seasons are full-time positions, and interns should expect to work in office, Monday through Friday, up to 40 hours per week, typically between 8am and 5pm. Specific team norms around working hours will be communicated by your manager. Interns should not have conflicts such as classes or other employment during the Amazon workday. Applicants should have a minimum of one quarter/semester/trimester remaining in their studies after their internship concludes.

By applying to this position, you will be considered for all Data Engineer roles at all locations we hire for in the United States, including but not limited to: Greater Seattle Area (Seattle, Bellevue, Redmond), Greater Bay Area (San Francisco, Sunnyvale, Santa Clara), Greater DMV (DC, MD, VA), Austin (TX), New York City (NY), and Minneapolis (MN). You will be able to indicate your preferred location and start date during the application process, but we cannot guarantee that we can meet your selection, based on several factors including but not limited to the availability and business needs of this role. The final location and available start dates will be confirmed at the time of the job offer. Start dates for the internships in this posting include dates in August or September 2026.
Job Responsibilities:
Design, implement, and automate deployment of our distributed system for collecting and processing log events from multiple sources
Design data schema and operate internal data warehouses and SQL/NoSQL database systems
Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards that engineers, analysts, and data scientists use to drive key business decisions
Monitor and troubleshoot operational or data issues in the data pipelines
Drive architectural plans and implementation for future data storage, reporting, and analytic solutions
Develop code-based, automated data pipelines capable of processing millions of data points
Improve database and data warehouse performance by tuning inefficient queries
Work collaboratively with Business Analysts, Data Scientists, and other internal partners to identify opportunities/problems
Assist with troubleshooting, root-cause analysis, and thorough resolution of defects when problems arise
Requirements:
Are 18 years of age or older
Can work a minimum of 40 hours/week and commit to an internship of up to 12 weeks
Are enrolled in an academic program that is physically located in the United States
Experience with data transformation
Experience with database, data warehouse, or data lake solutions
Experience with SQL
Experience with one or more scripting languages (e.g., Python, KornShell, Scala)
Currently enrolled in, or will receive, a Bachelor's Degree, Master's Degree, or advanced technical degree in Computer Science, Computer Engineering, Information Management, Information Systems, or an equivalent technical discipline, with an expected conferral date between October 2026 and December 2029
Nice to have:
Experience with AWS
Experience building data pipelines or automated ETL processes
Knowledge of writing and optimizing SQL queries in a business environment with large-scale, complex datasets
Experience with big data processing technology (e.g., Hadoop or Apache Spark), data warehouse technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments
Experience with data visualization software (e.g., AWS QuickSight or Tableau) or open-source projects
Experience from previous technical internship(s) or demonstrated project experience
Knowledge of the basics of designing and implementing a data schema, such as normalization and relational vs. dimensional modeling