amirza1993
Data Architect
Data Architect specialising in classification and reference data for web3 and entity‐centric workflows. I set the
product vision and roadmap for company classifications that enable sector and industry analysis, benchmarking, and risk views. I lead discovery and prioritisation, and translate client and internal feedback into clear requirements with acceptance criteria. I also partner with Engineering and Data to deliver iterative, scalable improvements. I build custom ETL data pipelines and publish data quality metrics covering completeness, recency, accuracy, and availability to guide decisions and drive data adoption. My technical stack includes Python (Pandas, NumPy), PostgreSQL, and AWS (S3, Glue, Athena, CloudWatch). I use Git within Agile delivery to collaborate effectively with stakeholders and lead cross-functional execution.
Experience: 4 years
Yearly salary: $102,000
Hourly rate: $50
Nationality: 🇬🇧 United Kingdom
Residency: 🇬🇧 United Kingdom
Experience
Data Architect
Io Finnet 2022 - 2026
• Owned end-to-end strategy and backlog for internal classification and segmentation processes across compliance and operations, defining product vision, roadmap, and success metrics.
• Converted discovery insights into requirements with acceptance criteria, achieving 100% stakeholder adoption.
• Built standardisation data lake pipelines in Python on AWS (S3, Glue, Athena) with PostgreSQL, cutting time to insight by 40% and enabling auditable outputs.
• Defined and published data quality KPIs (recency, accuracy, and availability) with thresholds, improving deployment stability by 50% through unit tests, integration tests, and code reviews.
• Established taxonomy governance (versioned datasets and change control) and ran stakeholder reviews to keep data category definitions consistent and clear, upholding SOC 2 (Type II) certification.
• Produced transparent reporting via Jupyter notebooks and AWS QuickSight dashboards, increasing operational efficiency by 20%.
PhD Data Researcher
Scottish Government 2021 - 2022
• Consolidated 6 million records from 70 sources into reproducible modelling workflows, and documented mappings and metadata to strengthen data lineage and auditability.
• Co-designed data classification schemes and standards with policy teams to enable consistent reporting, shortening analysis sprints by 27%.
• Delivered data classifications that translated complex KPI workflows into usable outputs, increasing dataset adoption by 18%.
• Produced 3 research publications validating dataset demand and quality (methodology, QA), leading to programme-wide standardisation.
Skills
big-data
crypto
data viz
data-science
postgres
python
english
arabic
chinese-mandarin
urdu