Location: Hybrid (Based in Central Ohio)
Purpose of the Data Architect
A Data Architect at RevLocal is expected to champion our approach to data, which
addresses our strategic needs: providing a foundation for our products, reducing
friction for our operations, and making company objectives big and visible. The
ideal candidate will have a strong knowledge of data architecture patterns, an
ability to implement these patterns through mentoring and directly pairing
with the engineering team and possess the capability to envision and
articulate the future state of RevLocal's data strategy. As an informal
servant-leader in the company, this individual will use their experience to
grow other team members to be proficient in data-centric approaches through
pairing, mentoring and more structured education (workshops, lunch and
learns, etc.). The person in this position is expected to architect scalable,
stable solutions that address the needs of the day while remaining flexible
enough to support the needs of the future.
Data Architect Responsibilities:
Develop and execute a data strategy that aligns to RevLocal's business
objectives and allows for continuous product innovation.
Work directly with infrastructure and software engineers to implement
solutions. The ability to effectively teach and coach is essential.
Hands-on development skills are required (no ivory towers, please).
Assess and prioritize data needs across the enterprise and create
appropriate fit-for-use solutions.
Implement data security standards for developing new solutions and for
maintaining existing solutions.
Data Architect Requirements:
Experience developing and implementing an Enterprise Data Strategy, from
concept through implementation, that meets the needs of Product and
Operations.
5+ years' experience in multiple data development techniques, including:
Relational DB design and development, SQL and data normalization.
NoSQL design, development and maintenance.
Medallion architecture as it relates to analysis systems.
Data warehouse, Data Lake and Lakehouse architectures / technologies, including experience in design, development and maintenance.
Demonstrated hands-on experience in Software Development, including:
C#, Python, R, DAX, and PowerShell
Experience working with Apache Spark and Parquet
Working knowledge of MS Azure Tools and Technologies including:
Azure Data Factory, Logic Apps, and Azure Functions
Azure Blob Storage
Working knowledge of AWS tools and technologies.
Exposure to Machine Learning methodologies is desired.
Bachelor's degree in Computer Science, Engineering, or related field.
Strong business acumen and the ability to connect technology needs to
business objectives.
Excellent technical, diagnostic, and troubleshooting skills.
Demonstrated leadership and organizational abilities.
Willingness to build professional, collaborative relationships with
technical team members and business stakeholders.
Willingness to work in a hybrid environment, with in-person work emphasized
for situations requiring high-bandwidth collaboration (brainstorming,
discovery, mentoring, coaching, etc.).
Excellent communication, motivational, coaching, and interpersonal skills.