40 Data Engineering jobs in Delhi

Data Engineering Role

Delhi, Delhi – 100x.inc

Posted today

Job Description

Minimum Requirements:

- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting

Highly Preferred Skills:

- Experience with orchestration tools such as Airflow or similar (see the sketch after this list)
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)
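
Purely as an illustration of the orchestration skill above, a minimal Airflow DAG might look like the sketch below (assuming Airflow 2.4+; the DAG id, schedule, and the extract/load callables are hypothetical placeholders, not part of the posting):

    # Minimal Airflow DAG sketch, assuming Airflow 2.4+.
    # dag_id, schedule, and the two callables are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # Placeholder: pull source data, e.g. from MySQL via SQLAlchemy.
        pass


    def load():
        # Placeholder: write transformed data, e.g. to S3 for Athena.
        pass


    with DAG(
        dag_id="example_etl",            # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task       # extract runs before load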

Ideal Candidate Profile:

The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.

Key Responsibilities:

- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions

Compensation range: ₹8.4–12 LPA (fixed base), excluding equity, performance bonus, and revenue share components.

Data Engineering (Azure Databricks)

Delhi, Delhi – EXL

Posted today

Job Description

The Data Engineer (DE) Consultant is responsible for designing, developing, and maintaining data assets and data-related products by liaising with multiple stakeholders.

Responsibilities:

- Work with stakeholders to understand data requirements, and design, develop, and maintain complex ETL processes.
- Create data integration and data diagram documentation.
- Lead data validation, UAT, and regression testing for new data asset creation.
- Create and maintain data models, including schema design and optimization.
- Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency.

Qualifications and Skills:

- Strong knowledge of Python and PySpark, with the ability to write PySpark scripts for developing data workflows (see the sketch after this list)
- Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum, with the ability to write SQL to query metadata and tables from data management systems such as Oracle, Hive, Databricks, and Greenplum
- Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks
- Experience using Hue to run Hive SQL queries and scheduling Apache Oozie jobs to automate data workflows
- Good working experience communicating with stakeholders and collaborating effectively with the business team on data testing
- Strong problem-solving and troubleshooting skills
- Ability to establish comprehensive data quality test cases and procedures and to implement automated data validation processes
- Degree in Data Science, Statistics, Computer Science, or a related field, or an equivalent combination of education and experience
- 4–7 years of experience in data engineering
- Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL
- Experience with the Azure cloud platform, such as developing ETL processes using Azure Data Factory and big data processing and analytics with Azure Databricks
- Strong communication, problem-solving, and analytical skills, with time management and multi-tasking ability and attention to detail and accuracy
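
As a rough sketch of the PySpark expectation above (the app name and table names are hypothetical placeholders), reading a Hive table, applying a simple quality filter, and persisting the result might look like:

    # Rough PySpark sketch; app and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("example_workflow")   # hypothetical app name
        .enableHiveSupport()           # allows querying Hive-managed tables
        .getOrCreate()
    )

    # Read a source table, drop duplicates, filter bad rows, and persist.
    orders = spark.table("raw_db.orders")          # hypothetical table
    cleaned = (
        orders
        .dropDuplicates(["order_id"])
        .filter(F.col("amount") > 0)
    )
    cleaned.write.mode("overwrite").saveAsTable("curated_db.orders")  # hypothetical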

Big Data Developer

Delhi, Delhi – BayOne Solutions

Posted today

Job Description

We are seeking a highly skilled Full Stack Big Data Engineer to join our team. The ideal candidate will have strong expertise in big data technologies, cloud platforms, microservices, and system design, with the ability to build scalable and efficient data-driven applications. This role requires hands-on experience across data engineering, backend development, and cloud deployment, along with a strong foundation in modern DevOps and monitoring practices.

Key Responsibilities:

- Design, build, and optimize big data pipelines using Scala, PySpark, Spark SQL, Spark Streaming, and Databricks.
- Develop and maintain real-time data processing solutions using Kafka Streams or similar event-driven platforms (see the streaming sketch after this list).
- Implement cloud-based solutions on Azure, leveraging services such as Azure Data Factory (ADF) and Azure Functions.
- Build scalable microservices with Core Java (8+) and Spring Boot.
- Collaborate on system design, including API development and event-driven architecture.
- Contribute to front-end development (JavaScript, React) as needed.
- Ensure application reliability through monitoring tools such as Grafana, New Relic, or similar.
- Utilize modern CI/CD tools (Git, Jenkins, Kubernetes, ArgoCD, etc.) for deployment and version control.
- Work cross-functionally with data engineers, software developers, and architects to deliver high-quality solutions.
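
By way of illustration of the streaming bullets above, a minimal Spark Structured Streaming read from Kafka might be sketched as follows (the broker address, topic, and sink paths are hypothetical; the spark-sql-kafka connector package is assumed to be on the classpath):

    # Sketch of a Spark Structured Streaming read from Kafka.
    # Broker, topic, and paths are hypothetical placeholders;
    # requires the spark-sql-kafka-0-10 connector package.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka_stream_example").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    # Kafka delivers raw bytes; cast the value column before processing.
    decoded = events.select(F.col("value").cast("string").alias("json"))

    query = (
        decoded.writeStream
        .format("parquet")
        .option("path", "/tmp/events")             # hypothetical sink
        .option("checkpointLocation", "/tmp/chk")  # required for fault tolerance
        .start()
    )
    query.awaitTermination()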

Qualifications:

- 5+ years of professional experience as a Software/Data Engineer or Full Stack Engineer.
- Strong programming skills in Scala, Python, and Java.
- Experience with Databricks, Spark SQL, Spark Streaming, and PySpark.
- Hands-on experience with Azure cloud services and data engineering tools.
- Solid knowledge of microservices development with Spring Boot.
- Familiarity with event-driven platforms such as Kafka.
- Experience with CI/CD pipelines and containerization/orchestration tools.
- Strong problem-solving and communication skills.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field (preferred).

Nice to Have:

- Experience with API design and event-driven architecture.
- Frontend development experience with React and JavaScript.

Big Data Engineer

Delhi, Delhi – Alef Education

Posted today

Job Description

Who we are

Alef Education began with a bold idea: that every learner deserves a personalised and meaningful education experience. What started in 2016 as a small pilot programme in Abu Dhabi has evolved into one of the world’s most dynamic EdTech companies—reshaping how millions of students engage with learning across the globe.

Today, Alef is proudly headquartered in the UAE, working hand-in-hand with ministries of education, schools, and teachers to bring smart, data-powered platforms into classrooms in over 14,000 schools.

Supporting over 1.1 million students and 50,000 teachers across the UAE, Indonesia & Morocco, our AI-driven platforms generate 16+ million data points every day, helping drive smarter learning decisions. Whether it’s improving national exam results, boosting classroom engagement, or supporting educators with world-class tools, Alef is committed to impact at scale.

In 2024, Alef made history as the first EdTech company to list on the Abu Dhabi Securities Exchange (ADX), cementing our role as a regional innovator with global reach.

About The Role

As an ALEF Big Data Engineer, you will have a strong understanding of big data technologies and an exceptional ability to code. You will provide technical leadership, working closely with the wider team to ensure high-quality code is delivered in line with the project goals and delivery cycles. You will work closely with other teams to deliver rapid prototypes as well as production code, for which you will ensure high accessibility standards are upheld. We expect familiarity with modern frameworks and languages, as well as working practices such as Clean Code, TDD, BDD, continuous integration, continuous delivery, and DevOps.

Key Responsibilities

Defining and developing services and solutions

- Define, design, and develop services and solutions around large data ingestion, storage, and management, such as with RDBMS, NoSQL DBs, log files, and events.
- Define, design, and run robust data pipelines/batch jobs in a production environment.
- Architect highly scalable, highly concurrent, and low-latency systems.

Maintain, support, and enhance current systems.

- Contribute to paying down technical debt and use development approaches that minimize the growth of new technical debt.
- Contribute feedback to improve the quality, readability, and testability of the code base within your team.
- Mentor and train other developers in a non-line management capacity.
- Build tools (One of SBT, Gradle, Maven).
- Ensure all software built is robust and scalable.

Collaborating with internal and external stakeholders

- Participate in sprint planning, working with developers and project teams to ensure projects are deployable and monitorable from the outside.
- Work with third-party and other internal providers to support a variety of integrations.
- As part of the team, you may be expected to participate in some of the second-line in-house support and out-of-hours support rotas.
- Proactively advise on best practices.

To Be The Right Fit, You'll Need

- Degree in Computer Science, Software Engineering, or a related field preferred
- Minimum of 5 years’ experience in a Big Data role
- Follow Clean Code/SOLID principles
- Adhere to and use TDD/BDD
- Outstanding ability to develop efficient, readable, highly optimized, maintainable, and clear code
- Highly proficient in functional Java, Scala, or Python
- Knowledge of Azure Big Data/Analytics services: ADLS (Azure Data Lake Storage), HDInsight, Azure Data Factory, Azure Synapse Analytics, Azure Fabric, Azure Event Hubs, Azure Stream Analytics, Azure Databricks
- Experience storing data in systems such as Hadoop HDFS, ADLS, and Event Hubs
- Experience designing, setting up, and running big data tech stacks such as Hadoop, Azure Databricks, and Spark, and distributed datastores such as Cassandra, DocumentDBs, MongoDB, and Event Hubs
- In-depth knowledge of the Hadoop technology ecosystem: HDFS, Spark, Hive, HBase, Event Hubs, Flume, Sqoop, Oozie, Avro, Parquet
- Experience debugging a complex multi-server service
- In-depth knowledge of and experience with IaaS/PaaS solutions (e.g. AWS infrastructure hosting and managed services)
- Familiarity with network protocols: TCP/IP, HTTP, SSL, etc.
- Knowledge of relational and non-relational database systems
- Understanding of continuous integration and delivery
- Mocking (any of Mockito, ScalaTest, Spock, Jasmine, Mocha; see the Python sketch after this list)
- IDE: IntelliJ or Eclipse
- An ability to communicate technical concepts to a non-technical audience
- Working knowledge of Unix-like operating systems such as Linux and/or Mac OS X
- Knowledge of the Git version control system
- Ability to quickly research and learn new programming tools and techniques
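
The mocking frameworks named above are JVM/JS tools; purely as an illustration of the same TDD/mocking practice in Python, a minimal pytest-style test with a stubbed dependency might look like this (fetch_user and its http_get parameter are hypothetical, invented for the example):

    # Illustration only: the posting names JVM/JS mocking frameworks,
    # but the same test-first pattern in Python uses unittest.mock.
    # `fetch_user` and its `http_get` dependency are hypothetical.
    from unittest import mock


    def fetch_user(user_id, http_get):
        """Toy function under test: fetch a user record via an injected client."""
        response = http_get(f"/users/{user_id}")
        return response["name"]


    def test_fetch_user_returns_name():
        # Arrange: stub the HTTP dependency instead of calling a real service.
        fake_get = mock.Mock(return_value={"name": "Ada"})
        # Act
        name = fetch_user(42, fake_get)
        # Assert: verify both the result and the collaboration.
        assert name == "Ada"
        fake_get.assert_called_once_with("/users/42")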

Hiring for Java + Big Data

Delhi, Delhi – Deloitte

Posted today

Job Description

Deloitte India is hiring Java + Big Data candidates (Big Data is mandatory) for its Chennai location. Interested candidates can apply.

Looking for candidates with 6 to 9 years of experience.

Key Responsibilities:

- Big Data technologies: Hadoop, Apache Spark, Apache Kafka, Flink (good to have).
- Develop and maintain scalable backend services using Java 8/17+ and Spring Boot.
- Design and implement RESTful APIs and manage the full request/response lifecycle.
- Apply Java 8+ features (Streams, Lambdas, Optional, etc.) for efficient coding practices.
- Work with SQL (Oracle) and NoSQL databases (e.g., MongoDB, Cassandra) for data storage and retrieval.
- Optimize database queries using Oracle SQL hints and other performance techniques.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
- Participate in code reviews, technical discussions, and Agile ceremonies.
- Explore and integrate event-based frameworks (e.g., Kafka, RabbitMQ) for asynchronous processing (see the consumer sketch after this list).
- Contribute to the design and implementation of Big Data pipelines where applicable.
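
The stack here is Java-centric, but as a language-neutral sketch of the event-based bullet above, a minimal Kafka consumer (shown in Python with confluent-kafka; broker, group id, and topic are hypothetical placeholders) could look like:

    # Minimal Kafka consumer sketch using confluent-kafka
    # (pip install confluent-kafka). Broker, group id, and topic
    # are hypothetical placeholders.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # hypothetical
        "group.id": "example-group",          # hypothetical
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["orders"])            # hypothetical topic

    try:
        while True:
            msg = consumer.poll(1.0)          # block up to 1s for a message
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            # Process the raw bytes; real code would deserialize and act on them.
            print(msg.value().decode("utf-8"))
    finally:
        consumer.close()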

How you’ll grow

Connect for impact
Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.

Empower to lead
You can be a leader irrespective of your career level. Our colleagues are characterized by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.

Inclusion for all
At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.

Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognize there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone’s welcome
Entrust your happiness to us: our workspaces and initiatives are geared towards your 360-degree happiness, including specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident, and comfortable. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.

Engineering Manager - Data Platform

Delhi, Delhi – Coinbase

Posted today

Job Description

Ready to be pushed beyond what you think you’re capable of?


At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system.


To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems.


Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.

While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.


The Data Platform & Service team builds and operates systems to centralize all of Coinbase's internal and third-party data, making it easy for teams across the company to access, process, and transform that data for analytics, machine learning, and powering end-user experiences. As an engineering manager on the team you will contribute to the full spectrum of our systems, from managing foundational processing and data storage, to building and maintaining scalable pipelines, to developing frameworks, tools, and internal applications to make that data easily and efficiently available to other teams and systems.


What you’ll be doing (i.e. job duties):

  • Develop and execute the multi-year strategy for Data Platform in collaboration with cross-functional partners, including Data Engineering, Data Science and Infrastructure orgs.
  • Mentor and guide your team members to make meaningful contributions to the organization while fostering their professional growth and career development.
  • Provide technical leadership by supporting your team in making sound architectural decisions and ensuring engineering quality through adherence to SLAs
  • Partner with engineers, designers, product managers, and senior leadership to translate the vision into actionable quarterly roadmaps.
  • Collaborate with the talent team to recruit and hire exceptional engineers who will enhance Coinbase's culture and product offerings.
  • Foster a positive and inclusive team environment aligned with Coinbase's culture, ensuring all team members feel valued and supported.


What we look for in you (i.e. job requirements):

  • At least 10 years of experience in software engineering.
  • At least 2 years of engineering management experience.
  • At least 2 years of experience in Data Streaming, Data Warehousing, and Data Governance.
  • You possess a strong understanding of what constitutes high-quality code and effective software engineering processes, creating an environment that fosters these principles.
  • An execution-focused mindset, capable of navigating through ambiguity and delivering results.
  • An ability to balance long-term strategic thinking with short-term planning.
  • Experience in creating, delivering, and operating multi-tenanted, distributed systems at scale.
  • You can be hands-on when needed – whether that’s writing/reviewing code or technical documents, participating in on-call rotations and leading incidents, or triaging/troubleshooting bugs.


Nice to haves:

  • Hands-on experience with Kafka/Pulsar and Spark/Flink
  • Experience with core AWS services and concepts (S3, IAM, autoscaling groups, RDS), and DevOps experience (see the sketch after this list)
  • Experience with platforms like Snowflake/Databricks
  • Crypto-forward experience, including familiarity with onchain activity
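
As a small illustration of the core-AWS bullet above, listing and uploading S3 objects with boto3 might look like the sketch below (bucket and key names are hypothetical; credentials are assumed to be resolved from the environment or an IAM role):

    # boto3 S3 sketch; bucket/key names are hypothetical placeholders.
    # Credentials are resolved by boto3 from the environment or an IAM role.
    import boto3

    s3 = boto3.client("s3")

    # Upload a local file to a bucket.
    s3.upload_file("report.csv", "example-bucket", "reports/report.csv")

    # List the objects under that prefix.
    resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])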


Job #: GPEM06IN


Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying.


Commitment to Equal Opportunity

Coinbase is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Employee Rights and the Know Your Rights notices by clicking on their corresponding links. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law.


Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations(at)coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here).


Global Data Privacy Notice for Job Candidates and Applicants

Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.


AI Disclosure

For select roles, Coinbase is piloting an AI tool based on machine learning technologies to conduct initial screening interviews to qualified applicants. The tool simulates realistic interview scenarios and engages in dynamic conversation. A human recruiter will review your interview responses, provided in the form of a voice recording and/or transcript, to assess them against the qualifications and characteristics outlined in the job description.


For select roles, Coinbase is also piloting an AI interview intelligence platform to transcribe and summarize interview notes, allowing our interviewers to fully focus on you as the candidate.


The above pilots are for testing purposes and Coinbase will not use AI to make decisions impacting employment. To request a reasonable accommodation due to disability, please contact accommodations(at)coinbase.com.
