About Us:
At Parkar, we stand at the intersection of innovation and technology, revolutionizing software development with our cutting-edge Low Code Application Platform, Vector.ai. Over almost a decade, we have expanded into four countries, offering a full range of software development services, including product management, full-stack engineering, DevOps, test automation, and data analytics.
Vector.ai, our pioneering Low Code Application Platform, redefines software development by integrating over 500 modular code components. It covers UI/UX, front-end and back-end engineering, and analytics for a streamlined, efficient path to digital transformation through standardized software development and AIOps.
Our commitment to innovation has earned the trust of over 100 clients, from large enterprises to small and medium-sized businesses. We proudly serve key sectors like Fintech, Healthcare-Life Sciences, Retail-eCommerce, and Manufacturing, delivering tailored solutions for success and growth.
At Parkar, we don't just develop software; we build partnerships and pave the way for a future where technology empowers businesses to achieve their full potential.
For more info, visit our website: https://parkar.in
Role Overview:
As a Data Architect, you will be responsible for designing, implementing, and maintaining the organization's data architecture. You will collaborate with cross-functional teams to understand business needs, develop data models, ensure data security and governance, and optimize data infrastructure for performance and scalability.
Responsibilities:
- Design, develop, and maintain conceptual, logical, and physical data models.
- Collaborate with stakeholders to understand data requirements and translate them into efficient database structures.
- Leverage Databricks, a unified analytics platform built on Apache Spark, for efficient big data processing and analytics.
- Develop strategies for data acquisition, storage, and retrieval that align with business objectives.
- Implement and manage data governance policies and procedures to ensure data integrity and security.
- Evaluate and select appropriate database technologies and tools to support business needs.
- Design and optimize ETL processes for data integration and transformation.
- Monitor and optimize database performance, ensuring scalability and reliability.
- Create and maintain documentation related to data architecture, data flows, and technical specifications.
- Stay updated with industry trends and emerging technologies to recommend innovative solutions.
Requirements:
- 8-10 years of experience in data architecture, with a strong focus on MS Fabric, Databricks, PySpark, SQL, and Power BI.
- Proficiency in MS Fabric and Databricks for data processing and analytics.
- Strong experience with PySpark and SQL for data transformation and integration.
- Advanced skills in Power BI for data visualization and reporting.
- Hands-on familiarity with data modeling and ETL processes.
- Experience with Azure Synapse Analytics and other cloud-based data platforms.
- In-depth knowledge of data governance principles and best practices.
- Strong understanding of data security measures and compliance requirements.
- Strong expertise in database design, data modeling, and data warehousing concepts.
- Relevant certifications such as DP-600 and DP-700 are highly desirable.
- Excellent problem-solving skills, attention to detail, and the ability to work collaboratively in a team environment.