
Digital Literacy: Algorithmic Literacy


What is algorithmic literacy?

Have you ever wondered how the information you encounter when you scroll through your feeds or explore tabs on social media platforms is curated? Does it really appear by chance? Is all the information you see on your screen objective and representative of the full range of differing viewpoints out there?

Algorithms are not inherently good or bad. Their effects depend on what they are programmed to do, who is doing the programming and to what end, how the algorithms operate in practice, how users interact with them, and what is done with the huge amount of personal data they feed on. Because algorithms are generally invisible – they are often called "black box" constructs, since they are not evident in user interfaces and their code is usually not made public – most people who use them daily are in the dark about how they work and why they can be a threat.

Algorithmic literacy is needed more than ever now that algorithms permeate our daily lives. It comprises being aware that computer programs carry inherent biases, always evaluating information rather than assuming the highest-ranked result is necessarily the best one, and understanding that sacrificing varying degrees of privacy is the norm when engaging with digital platforms.


Assessing Algorithms

1. Understand what algorithms are.
Most algorithms we encounter are 'prediction machines': models that use existing data to predict missing data. For example, Spotify takes the songs you have listened to in the past (existing data) to predict what you will enjoy going forward (missing data). Importantly, algorithms only establish correlations between items – not necessarily causation.
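To make the "prediction machine" idea concrete, here is a minimal sketch of a correlation-based recommender. The listening history, song names, and genre tags are all invented for illustration; this is not Spotify's actual algorithm, just the existing-data-predicts-missing-data pattern in miniature:

```python
from collections import Counter

# Existing data: a hypothetical listening history, reduced to genre tags.
history = ["indie", "indie", "jazz", "indie", "electronic"]

# Missing data: unheard songs whose appeal the model must predict.
catalogue = {"Song A": "indie", "Song B": "metal", "Song C": "jazz"}

def predict_enjoyment(history, catalogue):
    """Score each unheard song by how often its genre appears in the history.

    Note this captures correlation only: listening to indie correlates
    with enjoying more indie, but the model knows nothing about *why*.
    """
    genre_counts = Counter(history)
    total = len(history)
    return {song: genre_counts[genre] / total
            for song, genre in catalogue.items()}

scores = predict_enjoyment(history, catalogue)
best = max(scores, key=scores.get)  # the model's top recommendation
```

A genre never seen in the history scores zero, which also illustrates a limit of prediction machines: they can only extrapolate from the data they are fed.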

2. Know where algorithms are deployed.
Algorithms are increasingly more accurate and cheaper than humans at prediction tasks. This is especially true in environments that offer a lot of structured data, fairly consistent patterns, and only a limited number of possible outcomes. For example, it's comparatively easy for Google Maps to predict how crowded a restaurant will be at a given time, but quite difficult to forecast rare events such as earthquakes.

3. Understand the intent/goals of those behind the algorithm.
Algorithms are built to solve a particular problem, and the economics of that problem help explain the motivations of those who develop or deploy them. For example, a social network's main algorithm might not show you the posts that are most life-enriching: what the platform cares about is keeping you on the site, so as to maximise the time and likelihood that you interact with its ads.
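A toy sketch of what such an engagement-driven objective can look like. The posts, field names, and weights below are entirely invented for illustration and do not describe any real platform's ranking function; the point is only that the score optimises time-on-site and ad interaction, not how informative a post is:

```python
# Hypothetical post records with invented engagement predictions.
posts = [
    {"title": "Long-form essay",  "watch_seconds": 40, "ad_click_prob": 0.01},
    {"title": "Outrage clip",     "watch_seconds": 90, "ad_click_prob": 0.05},
    {"title": "Friend's update",  "watch_seconds": 15, "ad_click_prob": 0.00},
]

def feed_score(post):
    # The objective rewards predicted time spent and ad clicks --
    # nothing here measures whether the post is accurate or enriching.
    return post["watch_seconds"] + 1000 * post["ad_click_prob"]

feed = sorted(posts, key=feed_score, reverse=True)
top_post = feed[0]["title"]
```

Whatever maximises the score rises to the top, which is why understanding the goal behind an algorithm matters as much as understanding the algorithm itself.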

4. Take control of your data and privacy.
Algorithms feed on data. Training data is used to create an AI model in the first place, input data is used to produce predictions, and feedback data is used to refine the model. Some of your data may already have been used as training data, and more of it is used almost continuously as input data while you browse the web. Privacy regulations such as the GDPR help users gain more control over which data is used for which purpose, but it is better to think proactively about what data you share about yourself: every single 'like' of a statement, post or photo reveals preferences and affinities with people, products, ideas and political views.
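The three data roles described above can be sketched in a few lines. The numbers and the simple proportional model are invented for the example; real systems are far more complex, but the training/input/feedback cycle is the same:

```python
# 1. Training data: past (input, outcome) pairs used to fit the model.
training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def fit(pairs):
    """Fit a simple proportional model y ~ w * x by least squares."""
    numerator = sum(x * y for x, y in pairs)
    denominator = sum(x * x for x, _ in pairs)
    return numerator / denominator

w = fit(training_data)

# 2. Input data: a fresh observation the model turns into a prediction.
prediction = w * 4.0

# 3. Feedback data: the outcome actually observed, folded back in
#    to refine the model for next time.
observed = (4.0, 8.1)
training_data.append(observed)
w = fit(training_data)
```

Every interaction you have with a platform can end up in any of these three buckets, which is why controlling what you share matters at each stage, not just at sign-up.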

5. Avoid dependency.
By relying too heavily on the decision support algorithms provide, we risk becoming dependent on the algorithms and on the organisations that deploy them. GPS navigation works well as long as the US government grants access to the satellites, you have sufficient battery charge on your mobile device, access to map data, and a clear line of sight to the sky. Miss any one of these ingredients, and you are back to reading physical maps.


Recommended Reading

How Complex are Algorithms?

The Chief Technology Officer of Reddit illustrated the complexity of his platform's algorithm using a GIF of a virtual Rube Goldberg machine.
