9,918 Big Data Architect jobs in India
Big Data Architect
Posted today
Job Description
Data Engineer
">">A key player in our organization is sought to design and develop scalable data pipelines and architectures for informed decision-making across the company.
">">The successful candidate will have a proven track record of designing secure data pipelines to process structured and unstructured data from various sources, collaborating with cross-functional teams to gather requirements, and developing database solutions to support application development efforts.
">">We are looking for someone who can architect and develop complex data workflows and processing for performance, ensuring high-quality, reliable, and governed data. Experience in writing stored procedures and query performance tuning on large datasets is essential.
">">A strong understanding of database management systems (SQL, NoSQL), data warehousing concepts, data modeling principles, and methodologies is required. Additionally, excellent analytical skills with attention to detail, hands-on experience with data transformation techniques, including data mapping, cleansing, and validation, are necessary.
">">This role requires the ability to work independently and manage multiple priorities in a fast-paced environment while delivering results that meet the highest standards.
">">- ">">
- Architect and develop secure data pipelines to process structured and unstructured data from various sources;
- Collaborate with data scientists and stakeholders to understand data requirements;
- Optimize data workflows and processing for performance, ensuring data quality, reliability, and governance;
- Develop database solutions to support application development efforts;
- Work closely with cross-functional teams to gather requirements and deliver high-quality results.
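For illustration only, here is a minimal PySpark sketch of the mapping, cleansing, and validation work described above; the source path, column names, and email rule are hypothetical and not part of the role description.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer_cleansing").getOrCreate()

# Hypothetical source: raw customer records landed as JSON.
raw = spark.read.json("/landing/customers/")

cleansed = (
    raw
    # Mapping: rename source fields to the target model.
    .withColumnRenamed("cust_nm", "customer_name")
    .withColumnRenamed("eml", "email")
    # Cleansing: trim whitespace and normalise casing.
    .withColumn("email", F.lower(F.trim("email")))
    # Validation: keep only rows with a plausible email address, drop duplicates.
    .filter(F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
    .dropDuplicates(["email"])
)

cleansed.write.mode("overwrite").parquet("/curated/customers/")
```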
Big Data Architect
Posted today
Job Description
Data Infrastructure Specialist
We are seeking a skilled Data Infrastructure Specialist to support the development of our data infrastructure on Databricks. The ideal candidate will participate in technology selection, designing and building different components, integrating data from various sources, and managing big data pipelines for optimal performance.
Big Data Architect
Posted today
Job Description
Unlock a Challenging Role as a Data Engineer in Azure Databricks
We are seeking an experienced and skilled data engineer to join our team. As a data engineer, you will be responsible for designing, developing, and maintaining data assets and data-related products by liaising with multiple stakeholders.
Key Responsibilities:
- Collaborate with stakeholders to understand data requirements and design, develop, and maintain complex ETL processes.
- Create data integration and diagram documentation.
- Lead data validation, UAT, and regression testing for new data asset creation.
- Develop and maintain data models, including schema design and optimization.
- Design and manage data pipelines that automate data flow, ensuring quality and consistency.
Required Skills and Qualifications:
- Strong knowledge of Python and PySpark.
- Ability to write PySpark scripts to develop data workflows (a minimal sketch follows this list).
- Proficiency in SQL, Hadoop, Hive, Azure, Databricks, and Greenplum technologies.
- Experience writing SQL queries to retrieve metadata and tables from various data management systems.
- Familiarity with big data technologies like Hadoop, Spark, and distributed computing frameworks.
- Experience using Hue to run Hive SQL queries and scheduling Apache Oozie jobs to automate data workflows.
- Effective communication and collaboration skills with stakeholders and business teams.
- Strong problem-solving and troubleshooting skills.
- Ability to establish comprehensive data quality test cases and procedures and to implement automated data validation processes.
- Relevant degree in Data Science, Statistics, Computer Science, or related fields.
- 4-7 years of experience as a data engineer.
- Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
- Experience in Azure cloud computing platform, including developing ETL processes using Azure Data Factory and big data processing with Azure Databricks.
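As a rough illustration of the PySpark and Hive SQL scripting listed above, a minimal sketch is shown below; the database and table names (sales_db, orders) are hypothetical.

```python
from pyspark.sql import SparkSession

# On Databricks or a Hive-enabled cluster, Hive support is typically available.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Retrieve metadata: list the tables registered in a (hypothetical) database.
spark.sql("SHOW TABLES IN sales_db").show()

# Run a Hive SQL aggregation and persist the result for downstream use.
daily_orders = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM sales_db.orders
    GROUP BY order_date
""")
daily_orders.write.mode("overwrite").saveAsTable("sales_db.daily_order_summary")
```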
Achieve Excellence in Data Engineering
This role requires a highly skilled and motivated individual who can work effectively in a fast-paced environment. If you have a passion for data engineering and a strong desire to succeed, we encourage you to apply.
Big Data Architect
Posted today
Job Description
This is a high-level position that involves designing, developing, and maintaining ETL/ELT pipelines to process structured and unstructured data. The ideal candidate will have expertise in AWS services such as S3, Glue, Redshift, EMR, and Lambda.
Key Responsibilities:
- Develop and optimize data lakes and data warehouses using AWS services.
- Build automation scripts and tools using Python for data integration, validation, and transformation (see the sketch after this list).
- Collaborate with cross-functional teams to ensure data availability and reliability.
- Implement best practices for data governance, security, and monitoring.
- Optimize data pipelines for performance and cost efficiency.
- Troubleshoot and resolve data-related issues in production environments.
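A minimal, hedged sketch of the kind of Python automation described above, using boto3 to validate that a day's files landed in S3; the bucket name and prefix are hypothetical.

```python
import boto3

# Hypothetical bucket and prefix for a daily landing zone.
BUCKET = "example-data-lake"
PREFIX = "landing/orders/2024-01-01/"

s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
objects = response.get("Contents", [])

# Basic validation: files exist and none of them are empty.
if not objects:
    raise RuntimeError(f"No files found under s3://{BUCKET}/{PREFIX}")

empty = [obj["Key"] for obj in objects if obj["Size"] == 0]
if empty:
    raise RuntimeError(f"Empty files detected: {empty}")

print(f"Validated {len(objects)} files under s3://{BUCKET}/{PREFIX}")
```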
The successful candidate will have strong knowledge of data engineering principles and experience with AWS services. They will also be able to work effectively in a team environment and communicate complex technical ideas to non-technical stakeholders.
Requirements:
- Strong understanding of data engineering concepts and principles.
- Experience with AWS services such as S3, Glue, Redshift, EMR, and Lambda.
- Proficiency in Python programming language.
- Ability to collaborate with cross-functional teams.
- Strong communication and problem-solving skills.
Big Data Architect
Posted today
Job Description
Spearhead a team of big data engineers to develop architectures that meet client and team requirements.
Lead the implementation of a standardized Data Model and a single view of the customer in the Data Lake on Cloud.
Develop data pipelines for new sources and data transformations within the Data Lake, implement GraphQL, work on NoSQL databases, and handle CI/CD and data delivery as per business needs.
Build pipelines to bring in a wide variety of data from multiple sources within the organization as well as from social media and public data sources.
Collaborate with cross-functional teams to source data and make it available for downstream consumption.
Work with the team to provide an effective solution design to meet business needs.
Evaluate dependencies and challenges, escalating critical issues to the Sponsor and/or Head of Data Engineering.
Promote effective communication with key stakeholders and coordinate communications plans for initiative execution and delivery.
Big Data Architect
Posted today
Job Description
Job Title: Senior Data Engineer
">We are seeking a skilled and experienced Senior Data Engineer to lead our big data engineering team. As a key member of our organization, you will work closely with clients and team members to understand requirements and develop architectures that meet their needs.
">Key Responsibilities:
">- ">
- Provide technical leadership and guidance to the team, supporting the Data Engineering team in setting up the Data Lake on Cloud. ">
- Develop data pipelines for new sources, data transformations within the Data Lake, implementing GraphQL, and working on NoSQL databases. ">
- Collaborate with cross-functional teams to source data and make it available for downstream consumption. ">
Requirements:
">We are looking for a team player with excellent problem analysis skills, experience with Azure Databricks, Apache Spark, and Hive query language, streaming data pipeline using Structured Streaming or Apache Flink, and knowledge of NoSQL databases and Big Data ETL processing tools.
">What We Offer:
">As a Senior Data Engineer, you will have the opportunity to work on challenging projects, collaborate with a talented team, and contribute to the growth and success of our organization.
Big Data Architect
Posted today
Job Description
As a seasoned Data Engineer with our client in India, you'll play a key role in expanding their teams. Your primary responsibility will be to design, develop, and maintain data pipelines and ETL workflows using Databricks and Azure Data Factory.
Main Responsibilities:
- Designing efficient data models and Delta Lake architectures for analytics and AI applications (a brief sketch follows this list).
- Developing and optimizing Python-based data processing scripts (PySpark, pandas, APIs).
- Managing and tuning SQL and Postgres databases for schema design, indexing, and query performance.
- Collaborating with data scientists and architects on Generative AI applications leveraging Azure OpenAI or similar platforms.
- Implementing CI/CD pipelines and Git-based workflows for continuous data deployment and version control.
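For illustration of the Delta Lake work referenced in the list above, here is a minimal PySpark sketch that writes a partitioned Delta table and reads it back; the paths and partition column are hypothetical, and a Delta Lake runtime (such as Databricks) is assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_example").getOrCreate()

# Hypothetical staged dataset, rewritten as a Delta table partitioned by date.
orders = spark.read.parquet("/staging/orders/")
(
    orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/delta/curated/orders")
)

# Downstream analytics and AI pipelines read the same curated table.
curated = spark.read.format("delta").load("/delta/curated/orders")
curated.createOrReplaceTempView("orders")
spark.sql("SELECT order_date, COUNT(*) AS orders FROM orders GROUP BY order_date").show()
```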
Required Skills and Qualifications:
- Strong programming skills in Python (PySpark, pandas, REST APIs).
- Proven expertise in Databricks (workflows, notebooks, Delta Lake).
- Advanced SQL knowledge for data modeling and optimization.
- Experience with Azure Data Factory, Azure Data Lake Gen2, and Azure Synapse Analytics.
- Proficiency in Postgres (schema design, indexing, tuning).
- Familiarity with Vector Databases (pgvector, Qdrant, Pinecone, etc.) and Generative AI concepts.
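As a small, hedged example of the vector-database familiarity mentioned in the last item, the sketch below runs a pgvector similarity query through psycopg2; the connection string, table, and embedding values are hypothetical.

```python
import psycopg2

# Hypothetical connection and table: documents(id, content, embedding vector(3)).
conn = psycopg2.connect("dbname=analytics user=app password=secret host=localhost")
cur = conn.cursor()

# A toy query embedding; in practice this would come from an embedding model.
query_embedding = [0.12, 0.87, 0.33]
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

# pgvector's <-> operator orders rows by L2 distance to the query vector.
cur.execute(
    "SELECT id, content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
    (vector_literal,),
)
for doc_id, content in cur.fetchall():
    print(doc_id, content)

cur.close()
conn.close()
```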
Key Performance Indicators:
- Delivery of high-quality data pipelines and ETL workflows.
- Successful implementation of CI/CD pipelines and Git-based workflows.
- Effective collaboration with cross-functional teams.
Big Data Architect
Posted today
Job Description
Job Role:
Data Engineering encompasses a wide range of tasks, from designing and developing data pipelines to ensuring data security and compliance.
Big Data Architect
Posted today
Job Description
**Job Overview**
As a senior data engineer, you will lead a team of big data engineers and collaborate closely with clients to develop architectures that meet their needs.
You will provide technical leadership and guidance to your team, supporting the data engineering team in setting up the data lake on cloud and implementing standardized data models.
Key responsibilities include developing data pipelines for new sources and data transformations within the data lake, implementing GraphQL, working with NoSQL databases, and handling CI/CD and data delivery as per business requirements.
**Required Skills and Qualifications**
- Excellent problem analysis skills
- Good experience with Azure Databricks platform
- Experience with at least one cloud infrastructure provider (Azure/AWS)
- Building data pipelines using batch processing with Apache Spark or Hive query language
- Streaming data pipelines using Apache Spark Structured Streaming or Apache Flink on Kafka and Delta Lake
- Knowledge of NoSQL databases, big data ETL processing tools, data modelling, and data mapping
- Experience with Hive and Hadoop file formats (see the sketch after this list)
- Basic scripting knowledge
- Experience working with multiple data sources and a basic understanding of CI/CD tools and DevOps practices
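A brief sketch of the Hive and Hadoop file-format experience noted above: converting a hypothetical CSV extract into Parquet and ORC with PySpark (paths and dataset are illustrative only).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("file_formats").getOrCreate()

# Hypothetical raw extract delivered as CSV with a header row.
trips = spark.read.option("header", True).csv("/raw/trips/")

# Columnar Hadoop file formats commonly backing Hive tables.
trips.write.mode("overwrite").parquet("/warehouse/trips_parquet")
trips.write.mode("overwrite").orc("/warehouse/trips_orc")

# Parquet stores the schema with the data, so it can be read back without inference.
spark.read.parquet("/warehouse/trips_parquet").printSchema()
```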
**Benefits**
This role requires a strong ability to communicate effectively, work collaboratively in a team environment, and adapt to changing priorities.
**Others**
The ideal candidate should be able to debug, fine-tune, and optimize large-scale data processing jobs.
Big Data Architect
Posted today
Job Description
Data engineers play a pivotal role in driving business outcomes through data assets. Their responsibilities encompass collaborating with stakeholders to craft efficient ETL processes, ensuring seamless data integration and validation.
Key skills include Python, PySpark, SQL, Hadoop, Azure Databricks, and Greenplum.
Qualifications:
- Strong analytical capabilities
- Effective communication
- 4-7 years of experience as a Data Engineer
A degree in a related field or an equivalent combination of education and experience is required for this position.