Building trustworthy data pipelines because AI cannot learn from dirty data
How to unit test PySpark
Recently, I came across an interesting problem: how to speed up the feedback loop while maintaining a PySpark DAG. Of course, I could just run the Spark Job and look...
24 Feb 2020
How to speed up a PySpark job
I had a Spark job that occasionally ran extremely slowly. On a typical day, Spark needed around one hour to finish it, but sometimes it required over four hours....
17 Feb 2020
How does MapReduce work, and how is it similar to Apache Spark?
In this article, I am going to explain the original MapReduce paper “MapReduce: Simplified Data Processing on Large Clusters,” published in 2004 by Jeffrey Dean and Sanjay Ghemawat.
10 Feb 2020
#Papers We Love
Data streaming with Apache Kafka - guide for data engineers
Are you preparing for a data engineer job interview? Here are my answers to job interview questions about data streaming.
03 Feb 2020
Data streaming: what is the difference between the tumbling and sliding window?
When you start processing streams of events, there always comes a time to decide how to group them. We have a few kinds of window functions that we can...
27 Jan 2020
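The distinction between the two window types can be sketched in a few lines of plain Python (the event timestamps, window size, and slide step below are made-up values for illustration, not from any particular streaming framework):

```python
# Hypothetical event timestamps in seconds, a 10-second window,
# and a 5-second slide step.
events = [1, 3, 12, 14, 25]
size = 10
step = 5

# Tumbling windows: fixed, non-overlapping buckets.
# Each event lands in exactly one window.
tumbling = {}
for t in events:
    start = (t // size) * size
    tumbling.setdefault(start, []).append(t)

# Sliding windows: overlapping buckets that advance by the step.
# An event can land in several windows.
sliding = {}
for t in events:
    # every window [start, start + size) that covers t
    start = (t // step) * step
    while start > t - size and start >= 0:
        sliding.setdefault(start, []).append(t)
        start -= step
```

With these numbers, the event at second 12 falls into exactly one tumbling window (starting at 10) but into two sliding windows (starting at 5 and at 10), which is the essential difference between the two groupings.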
I put a carnivorous plant on the Internet of Things to save its life, and it did not survive
This article is a text version of my talk, "I put a carnivorous plant on the Internet of Things," which I presented during the DataNatives conference (November 25-26, 2019 in...
23 Jan 2020
What are the 4 V's of big data, and which one is the most important?
One of the first models describing big data was the four Vs model. That definition divides big data into four categories (sometimes called dimensions) of problems: volume, velocity,...
20 Jan 2020
10x software architecture: high cohesion
A few months ago, it was fashionable to complain about the 10x developer myth. I agree that such people don’t exist, but, in my opinion, proper software architecture can transform...
12 Jan 2020
How to add dependencies to AWS lambda
The process of adding dependencies to an AWS Lambda function consists of two steps. First, we have to install the dependencies in the source code directory. Later, we have to package...
08 Jan 2020
Four books to boost your programmer career
I quit my dream job because of a book
06 Jan 2020
What is the difference between a data lake, a data warehouse, and a data mart?
We can easily distinguish between them by focusing on three qualities: data structure (schema), data quality, and ownership.
18 Dec 2019
Three biggest traps to avoid while setting Spark executor memory
What happens when you set the executor memory of a Spark worker that uses YARN as the cluster resource manager? Does it get exactly the amount of memory you requested?...
16 Dec 2019