About Us:

At Parkar, we stand at the intersection of innovation and technology, revolutionizing software development with our cutting-edge Low Code Application Platform, Vector.ai. Over nearly a decade, we have expanded across four countries, offering a full range of software development services, including product management, full-stack engineering, DevOps, test automation, and data analytics.

Vector.ai, our pioneering Low Code Application Platform, redefines software development by integrating over 500 modular code components. It spans UI/UX, front-end and back-end engineering, and analytics, providing a streamlined, efficient path to digital transformation through standardized software development and AIOps.

Our commitment to innovation has earned the trust of over 100 clients, from large enterprises to small and medium-sized businesses. We proudly serve key sectors like Fintech, Healthcare-Life Sciences, Retail-eCommerce, and Manufacturing, delivering tailored solutions for success and growth.

At Parkar, we don't just develop software; we build partnerships and pave the way for a future where technology empowers businesses to achieve their full potential.

For more information, visit our website: https://parkar.in

Role Overview:

As a Data Architect, you will be responsible for designing, implementing, and maintaining the organization's data architecture. You will collaborate with cross-functional teams to understand business needs, develop data models, ensure data security and governance, and optimize data infrastructure for performance and scalability.

Responsibilities:

    • Lead the design, development, and deployment of robust and scalable data pipelines across raw, curated, and consumer layers (a brief illustrative sketch follows this list).
    • Collaborate with cross-functional teams to gather data requirements and translate them into technical solutions.
    • Leverage Databricks (Apache Spark) and PySpark for large-scale data processing and real-time analytics.
    • Implement solutions using Microsoft Fabric, ensuring seamless integration, performance optimization, and centralized governance.
    • Design and manage ETL/ELT processes using Azure Data Factory (ADF), Synapse Analytics, and Delta Lake on Azure Data Lake Storage (ADLS).
    • Drive implementation of data quality checks, error handling, and monitoring for data pipelines.
    • Work with SQL-based and NoSQL-based systems to support diverse data ingestion and transformation needs.
    • Guide junior engineers through code reviews, mentoring, and enforcing development best practices.
    • Support data governance and compliance efforts, ensuring high data quality, security, and lineage tracking.
    • Create and maintain detailed technical documentation, data flow diagrams, and reusable frameworks.
    • Stay current with emerging data engineering tools and trends to continuously improve infrastructure and processes.
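
For context on the stack above, here is a minimal, illustrative PySpark sketch of a raw-to-curated Delta Lake step with a simple data-quality gate. The storage paths, table, and column names are hypothetical placeholders for illustration only, not an actual Parkar pipeline.

    # Minimal illustrative sketch: promote raw JSON to a curated Delta table.
    # All paths and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("raw_to_curated").getOrCreate()

    # Hypothetical ADLS locations for the raw and curated layers.
    RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
    CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders/"

    # Ingest raw JSON landed by an upstream copy activity (e.g., ADF).
    raw_df = spark.read.json(RAW_PATH)

    # Basic quality gate: drop records missing a key or an amount.
    clean_df = raw_df.dropna(subset=["order_id", "amount"])

    # Standardize types and stamp an audit column before promotion.
    curated_df = (
        clean_df
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .withColumn("ingested_at", F.current_timestamp())
    )

    # Write to the curated layer as a Delta table (Parquet files plus a transaction log).
    curated_df.write.format("delta").mode("overwrite").save(CURATED_PATH)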

Requirements:

    • 8–10 years of experience in Data Engineering, with a focus on Azure Cloud, Databricks, and Microsoft Fabric.
    • Proficiency in PySpark, Spark SQL, and ADF for building enterprise-grade data solutions.
    • Strong hands-on SQL skills and experience managing data in Delta Lake (Parquet) format.
    • Expertise in Power BI for developing insightful dashboards and supporting self-service analytics.
    • Solid understanding of data modeling, data warehousing, and ETL/ELT frameworks.
    • Experience working with Azure Synapse Analytics, MS SQL Server, and other cloud-native services.
    • Familiarity with data governance, data lineage, and security best practices in the cloud.
    • Demonstrated ability to lead engineering efforts, mentor team members, and drive delivery in Agile environments.
    • Relevant certifications such as DP-203, DP-600, or DP-700 are a strong plus.
    • Strong problem-solving abilities, excellent communication skills, and a passion for building high-quality data products.