Experimenting Can Lead to Great Dataviz

Some may have the idea that creating an amazing data visualization requires the stars to align. But the truth is, as with any other skill, you need practice and real-life experience to teach you what works and what doesn’t. I believe experimenting with dataviz, without being afraid of making mistakes, can lead to discoveries, new formats, new communication methods, or even (re)defined processes in our field. VizHeads.com is a great example of this.

Hi, my name is Leticia. I am Brazilian, currently based in Barcelona, and I am a co-founder and data strategist at Odd Studio. Odd is a design, science, and tech studio that prototypes data products (algorithms, dataviz, data storytelling…). Our day-to-day work often involves complex data science problems for some “big clients.” When you are dealing with topics like COVID, mining safety, or mother and child health in Brazil, India, and Africa, you are always afraid of making mistakes. A small design error can impact millions of people’s lives.

However, that thought has never stopped our team from pushing boundaries. We say we love keeping things “fresh and familiar.” I often find myself looking at a project and thinking, “How can we make this stand out but, at the same time, speak to its audience?” Ultimately, the final decision on a project is always the client’s. This means that even though we love stretching constraints, we still have to be flexible with our original proposals (as many of you probably do too). So, being able to experiment outside of client-focused work, when the only limits are your personal resources and time, can be a blessing.

In 2021, my associate partners and I tried participating in the challenge to visualize the data from the Data Visualization Society State of the Industry Survey, but I was in the middle of a Master’s degree, with pandemic brain and burnout. I was sure that our team would be able to come up with an incredible viz for the competition, but it totally flopped. In 2022, we had more people on the team, and our approach was completely different: we looked at the opportunity with absolutely no expectation of making it work, just throwing out ideas. And that has been our approach to experimentation ever since: if you expect a specific output or can’t change your project halfway through, you probably won’t be able to experiment.

In 2022, we were well rested from a collective vacation, and there were only seven days left in the competition. Bruno, our lead designer, threw the idea out there and asked who wanted to join; people raised their hands, and we went for it. Here is how we got to the final result.

As I said, the idea was to try new things, so the rules were pretty straightforward: it doesn’t need to make sense or be useful. It needs to be achievable with the data, time, skills, and resources we have, or at least we should be able to come up with a v.0 without any extra investment. If we happened to like it, we would submit the project; if not, we would have learned something, and that was enough.

There are usually two main approaches to data: you can either start by exploring the data for patterns or you can go directly to generating ideas. Since we did not have the time, we went for ideas. Everyone had to quickly study the data and write down their thoughts.

After reviewing our concepts as a group, we each had to sketch three dataviz concepts we liked best, bringing them as close as possible to how they would actually look. This is a very important step that helps us quickly identify misinterpretations of the data, as well as leave behind things that sound like a good idea but, when you put pen to paper, don’t make sense. It can be challenging, though, since it forces you to make decisions you may not have an answer to.

Here are a few of the ideas we had: 

I could go for any of the above; honestly, they are all interesting. Maybe next year?

After voting on our favorites, we tested them with real data. We build proofs of concept with open-source tools, and even simplified versions of the dataviz filled with real data, to get as close to reality as we can. There is no way to anticipate whether something will work unless you test it. Three ideas were tested with the least amount of data wrangling possible.

Once the results were out, we had a finalist: a website with the heads of the people most mentioned by their peers in the dataviz industry. The size of each head would represent the number of times that person was mentioned. By interacting with the heads, you would get information on our celebrities, such as names, portfolios, and so on. Our objective was to pose the question: who is heading the dataviz industry? Pun intended.

But we still needed to figure out the details:

Chaos. Everyone working at the same time.

Designing what the heads and the wireframe would look like (we use Figma), while coding blocks hovering around the screen, pushing and pulling each other (p5.js and Matter.js’s crazy physics), while cleaning the data (I start from the top of the dataset, you start from the bottom, we meet in the middle), while collecting pictures of all the people mentioned (we tried doing it automatically; the results weren’t great). At this point, every bit of extra time the team had was used to help.
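To give a sense of the push-and-pull mechanic, here is a minimal sketch of the idea in p5.js and Matter.js. It is a simplified illustration rather than our production code, and the people array is made-up placeholder data, not the survey dataset.

```javascript
// Minimal p5.js + Matter.js sketch: one circle per person, sized by
// mention count, with physics so the circles push and pull each other.
// Assumes p5.js and matter.js are loaded via <script> tags.
const { Engine, Bodies, Composite } = Matter;

// Placeholder data, not the survey dataset
const people = [
  { name: "Person A", mentions: 42 },
  { name: "Person B", mentions: 17 },
  { name: "Person C", mentions: 5 },
];

let engine;
let heads = [];

function setup() {
  createCanvas(800, 600);
  engine = Engine.create();
  engine.gravity.y = 0; // let the heads float instead of falling

  // One physics body per person; radius scales with mention count
  heads = people.map((p) => {
    const r = 10 + sqrt(p.mentions) * 6;
    const body = Bodies.circle(random(width), random(height), r, {
      restitution: 0.9,  // bouncy collisions
      frictionAir: 0.02, // slow drift so the heads settle
    });
    body.label = p.name;
    return body;
  });
  Composite.add(engine.world, heads);
}

function draw() {
  background(255);
  Engine.update(engine); // advance the physics simulation one step

  for (const head of heads) {
    noStroke();
    fill(30);
    circle(head.position.x, head.position.y, head.circleRadius * 2);
  }
}
```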

A few anecdotes from this step:

Data cleaning will take more time than you expect; ask for help. We started with 1,760 answers and finished with 421 names (one answer could contain more than one name). Since it was an open field, we did not leave out any answer we could make sense of (a rough sketch of this kind of cleanup follows these anecdotes). Not knowing how to spell someone’s name correctly does not mean the person answering the survey won’t try anyway. Dear Federica, we feel your pain.

Anticipate and test crazy attempts from your users. The crazier, the more likely to happen. “The user won’t try this crazy thing of dragging all the heads off the screen,” said our technologists. Yes, they will. I was the first to try it, and to achieve it.

One crazy interaction at a time is enough. Adding sound to what is already a chaotic visualization makes it overwhelming (we imagined Alberto Cairo saying “Alberto” every time you hovered over his head, and even tested sounds that swelled and faded as you hovered over big and small heads; the result was terrible).

New technologies are great, but they won’t perform the same for all users. Make sure you have time to test on different screens, operating systems, connection speeds, and old and new computers. Loading 400+ heads onto a screen can be a challenge for many devices.

Prioritizing is key. “Awesome idea, put that on the wish list” became the catchphrase we said most in the final days. We had to make some tough choices to filter which features were essential to the project’s main purpose.
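For the data-cleaning anecdote above, here is a rough sketch of the kind of normalize-and-tally pass we are describing. The answers and the alias map are invented examples; the real cleanup involved far more manual review.

```javascript
// Rough sketch: normalize free-text survey answers and tally how often
// each name appears. The alias map and answers are invented examples.
const aliases = {
  "federica fragapane": ["federica", "f. fragapane", "federica fragapane"],
};

function normalize(raw) {
  return raw
    .toLowerCase()
    .normalize("NFD")                // split off accent marks...
    .replace(/[\u0300-\u036f]/g, "") // ...then drop them
    .replace(/\s+/g, " ")
    .trim();
}

function canonical(name) {
  const n = normalize(name);
  for (const [full, variants] of Object.entries(aliases)) {
    if (variants.includes(n)) return full;
  }
  return n; // unknown spellings pass through for manual review
}

// One survey answer can contain more than one name
const answers = ["Federica Fragapane, Alberto Cairo", "federica"];
const counts = {};
for (const answer of answers) {
  for (const part of answer.split(",")) {
    const name = canonical(part);
    counts[name] = (counts[name] || 0) + 1;
  }
}
console.log(counts);
// -> { "federica fragapane": 2, "alberto cairo": 1 }
```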

Day 5 ended on a high note. Lots of bits and pieces, but thanks to everyone’s effort, things were coming together.

One day before submitting, things looked promising. We pretty much had the data, links to social media, and images collected and organized. The coding team was testing in all kinds of environments and coming up with solutions to improve performance or make things easier to render on specific systems.

We wanted the final result to be an answer to the question, “I’m interested in dataviz; who should I follow?”, so that anyone asked could just send the link. It could also be a way to recognize fellow professionals, pay homage to people and organizations we admire, or even discover references we had not heard of yet.

But as I was browsing the final dataset (I had only worked on parts of it), all I could see in the country field was USA, USA, USA, or UK, UK, UK, Germany, Spain, Portugal… and it suddenly hit me: who is representing Latin America?

We had already given up on the idea of using color as a dimension; in all of the design tests, it was clear that it did not add anything to the visualization, just extra noise. But once we looked at this variable from a different perspective, we could show how much of our dataviz spotlight is directed at the Global North: 91 percent of all the professionals mentioned in the survey are based there. Once we saw that, we decided we had a bigger story to tell.
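As a simplified illustration of that check, grouping countries into regions and computing each region’s share of mentions could look something like this (the region map and data below are placeholders, not the actual survey dataset):

```javascript
// Simplified sketch: group mentioned professionals by region and
// compute each region's share. Placeholder data, not the survey.
const regionOf = {
  USA: "Global North",
  UK: "Global North",
  Germany: "Global North",
  Brazil: "Latin America and the Caribbean",
  India: "Asia",
};

const mentions = [
  { name: "Person A", country: "USA" },
  { name: "Person B", country: "UK" },
  { name: "Person C", country: "Brazil" },
];

const byRegion = {};
for (const m of mentions) {
  const region = regionOf[m.country] || "Unknown";
  byRegion[region] = (byRegion[region] || 0) + 1;
}

for (const [region, count] of Object.entries(byRegion)) {
  const share = ((count / mentions.length) * 100).toFixed(0);
  console.log(`${region}: ${share}%`);
}
// -> Global North: 67%, Latin America and the Caribbean: 33%
```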

With the time left, we tested different ways of adding color to the heads and grouping countries in a way that would show the lack of representation from Africa, Asia, Oceania, Latin America, and the Caribbean. We made sure the interactions worked, rewrote all the text, including new information on representation, and decided the project needed its own permanent domain. We did not have time to add the final features we wished for, but we are very happy with what we could accomplish in the time we had. We even got 3rd place in the exploratory category!

By experimenting and wanting to have fun with the survey data, we stumbled upon a bigger question: are we doing enough to uplift our fellow Latin American dataviz experts, or are we just reinforcing the same Global North-centered mindset? We pride ourselves on being inclusive and proactively searching for diversity when hiring, but is it enough?

When I moved to Barcelona, my intention was to work with local companies and the community to show how talented Brazilians are. We often think that we are not good enough. We are taught to think less of what we have, and we downplay our skills when talking about them. This experiment reminded me of how much we need to be present in these spaces. We need to make an effort, when speaking, to use examples from our own countries, especially with international audiences.

We love Alberto Cairo’s work, but have you heard of researcher Fernanda Viegas? Bloomberg is awesome, but have you seen the work Cafe.art.br does? Sometimes, less represented regions do so much with so little. We need to appreciate the barriers we have to overcome and give that work the recognition it deserves.

We cannot change this alone. So this is a call for everyone in the data visualization field: research, recommend, and mention at least one person from the less represented regions in the next DataViz Survey. If you do so, we will make sure their heads are represented next year on VizHeads.com.

Point us to the heads. We will help bring them to the spotlight!
