Bangalore, KA, IN
BI/Reporting Engineer
Position: BI/Reporting Engineer

Job Description:

What we're looking for:

•  Overall 10 years of industry experience, including 6+ years as a developer using the Databricks/Spark ecosystem.

•  Hands-on experience with Unified Data Analytics on Databricks: the Databricks Workspace user interface, managing Databricks notebooks, and Delta Lake with Python and Spark SQL.

•  Good understanding of Spark architecture on Databricks and Structured Streaming; setting up Azure Databricks, configuring the Databricks workspace for business analytics, managing clusters in Databricks, and managing the machine-learning lifecycle.

• Hands-on experience with data extraction (schemas, corrupt-record handling, and parallelized code), transformations and loads (user-defined functions, join optimizations), and production (optimizing and automating Extract, Transform and Load).

TECHNICAL SKILLS

•  Spark DataFrame API
•  Python for Data Science
•  Spark Programming
•  SQL for Data Analysis
•  Simplifying Data Analysis with Python
•  Managing Databricks Clusters
•  Databricks Administration
•  Data Extraction, Transformation and Load
•  Implementing Partitioning and Programming with MapReduce
•  Setting up an Azure Databricks Account
•  Linux Command Line

What you'll be doing:

•  Develop Spark applications using PySpark and Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.

•  Extract, transform, and load data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, and U-SQL (Azure Data Lake Analytics).

•  Ingest data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and process the data in Azure Databricks.

•  Hands-on experience developing SQL scripts for automation.

•  Responsible for estimating cluster size, and for monitoring and troubleshooting the Spark Databricks cluster.

•  Ability to apply the Spark DataFrame API to complete data manipulation within a Spark session.

•  Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors and tasks, deployment modes, the execution hierarchy, fault tolerance, and collection.

•  Collaborate with delivery leadership to deliver projects on time, adhering to quality standards.

•  Contribute to the growth of the Microsoft Azure practice by helping with solutioning for prospects.

•  Problem-solving skills along with good interpersonal and communication skills.

•  Self-starter who can pick up other relevant Azure services in the analytics space.

Location: IN-KA-Bangalore, India (SKAV Seethalakshmi) GESC

Time Type: Full time

Job Category: Information Technology