How to do a join on event data

Joins are a powerful feature in traditional static databases that combine data stored in two or more entity tables using SQL. You might use a join to answer a question like “Which customer spent the most money last month?”

A lot of our customers have asked us “Can I do a join with event data?”

The answer is: while you can’t do a traditional join on event data, you can usually accomplish the same outcome by running a group_by on an event property. It’s pretty cool and very easy!

Here’s how:

First, imagine all the information you might want to join together if you were using a traditional entity database. With event data, all of that information is already there, right inside the event, every single time!

To understand this, let’s take a look at how event data is stored. An event is triggered by a customer or user’s actions, and this event contains data regarding what the action was, when it occurred, and the most up-to-date information about the state of that user at that time.

For example, if you work at an e-commerce company, you will probably want to track purchases. Every time you track a purchase, you can include information about that purchase. Here’s an example of some of the information you might want to track on every purchase and how you would model this event with event data:

purchases = {
   "user": {
       "first_name": "Arya",
       "last_name": "Stark",
       "email": "",
       "id": 22
   },
   "order": {
       "id": "XD-01-25"
   },
   "product": {
       "list_price": 19.99,
       "description": "This is the best Dog Shirt",
       "name": "Dog Shirt",
       "id": 10
   },
   "keen": { // these keen properties are automatically added to each event
       "timestamp": "2015-06-16T23:24:05.558Z", // when the event occurred
       "created_at": "2015-06-16T23:24:05.558Z", // when the event is written
       "id": "5580b0153bc6964d87a3a657" // unique event id
   }
}

As you can see, every time a purchase is made we are tracking things like:

  • User information
  • Order information
  • Product information
  • Time

This format allows for quick and efficient aggregation querying: that is, the ability to easily derive sums, counts, averages, and other calculations. With this format, we will be able to ask questions like:

  • Which products were purchased most often?
  • Which users have spent the most money?
  • What is the average order value?
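
Because each event already carries its user and product snapshot, answering these questions is a single pass over the events. Here is a toy sketch in plain JavaScript (in-memory arrays and a hypothetical helper, not the actual Keen API) of a count grouped by product.name:

```javascript
// Hypothetical flat purchase events -- each one carries its own
// user and product snapshot, so no lookup table is needed.
const purchases = [
  { user: { first_name: "Arya" },  product: { name: "Dog Shirt", list_price: 19.99 } },
  { user: { first_name: "Sansa" }, product: { name: "Cat Shirt", list_price: 14.99 } },
  { user: { first_name: "Arya" },  product: { name: "Dog Shirt", list_price: 19.99 } },
];

// "count grouped by a property": walk the events once, tally per group.
function countBy(events, path) {
  const counts = {};
  for (const event of events) {
    // resolve a dotted property path like "product.name"
    const key = path.split(".").reduce((obj, part) => obj[part], event);
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

console.log(countBy(purchases, "product.name"));
```

The same helper answers the user question too: `countBy(purchases, "user.first_name")`.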

We can answer each of these with one simple query! As an example, let’s ask the question, “What was the most popular product?” Here’s what the query would look like:

What is the most popular product purchased?

new Keen.Query("count", {
    eventCollection: "purchases",
    groupBy: "product.name",
    timeframe: "last_week"
});

Result: The Mallard and The Horse Shirt are the most popular.

Now, let’s say we want to know which customer made the most purchases last week.

Which user made the most purchases?

new Keen.Query("count", {
     eventCollection: "purchases",
     groupBy: "user.first_name",
     timeframe: "last_week"
});

Query Result: Sansa & Stannis tie!

Finally, let’s find out what our total gross revenue is across all users.

What is my total gross revenue?

new Keen.Query("sum", {
     eventCollection: "purchases",
     targetProperty: "product.list_price",
     timeframe: "last_week"
});

Query Result: $439 (not bad for animal-themed t-shirts!)

With entity data, you could use SQL to answer these questions by running joins on multiple tables. To answer the question “What was the most popular product?” you would need to have a users table, a products table, and a purchases table. You would get the same result, but the path to get there would be longer.
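
For comparison, here is roughly what that longer path looks like with entity tables (a toy in-memory sketch to illustrate the extra lookup hops, not real SQL or any particular database):

```javascript
// Hypothetical entity tables, as you might keep them in a relational store.
const users = [{ id: 22, first_name: "Arya" }, { id: 23, first_name: "Sansa" }];
const products = [{ id: 10, name: "Dog Shirt" }, { id: 11, name: "Cat Shirt" }];
const purchases = [
  { user_id: 22, product_id: 10 },
  { user_id: 23, product_id: 11 },
  { user_id: 22, product_id: 10 },
];

// The "join": for each purchase row, look up the matching product row,
// then tally by product name -- the same answer, via an extra hop.
const productCounts = {};
for (const row of purchases) {
  const product = products.find(p => p.id === row.product_id);
  productCounts[product.name] = (productCounts[product.name] || 0) + 1;
}

// Same idea for "which user made the most purchases?"
const userCounts = {};
for (const row of purchases) {
  const user = users.find(u => u.id === row.user_id);
  userCounts[user.first_name] = (userCounts[user.first_name] || 0) + 1;
}

console.log(productCounts, userCounts);
```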

In Keen, when an event is triggered you’ll include everything you know about the user at that point in time. This serves as a snapshot of the user as you knew them at that moment. That information can include data about who they are (name, username, account number, user ID, age at that time), what device they were using, what was purchased, and any other properties you have available. When you’re ready to query, this snapshot becomes incredibly powerful.

If you want to learn more about the difference between Entity Data and Event Data, check out this guide on How to Think About Event Data.

The most important point with event data is to think carefully about the kind of questions you’d like answered when you set up your data tracking. That way you’ll be sure to have the information available when it comes time to query.

To learn more about what to track, and when, check out our Data Modeling Guide.

So which is better: entity data or event data?

Both have their strengths. In general, entity data is best for storing static information about nouns (users, inventory, assets, etc.) while event data is ideal for tracking data related to verbs (signup, purchase, click, upgrade, etc.)

Very often, the questions that are most important to your business revolve around these user actions, and event data allows you to run analytics on them without having to do joins at all.

Learn more about modeling event data

Check out our data modeling guide and sign up for a free account to start playing around with your own event data. Questions? Reach out to us or post a question on Slack!

Maggie Jan

Data Scientist, Engineer, Teacher & Learner

New status page metrics

Devin, one of our platform engineers, recently made a change to our Keen IO Status page. He sent out a great email to the rest of the Keen IO team with a detailed explanation of how and why. Since this is a new user-facing metric, I wanted to share it here: it will help you debug, check on our platform’s status, and get a clearer picture of the inner workings of Keen IO. Thanks for taking the time to write this email, Devin! -Taylor

TL;DR We have a new user-facing metric for transparency and to act as an aid in debugging for our engineering teams.

Out with the old

I have updated our status page with a new metric and removed an old one. Previously, we displayed a metric called the “Write Event Delta”: the number of events that our users had supplied to us for writing that were still waiting to be written to Cassandra. This metric wasn’t particularly meaningful to our users – it was hard to know what 3,000 waiting events meant versus 12,000.

In with the new

The new metric is the “Event Write Delay”. This indicator shows how long events are waiting to be written to our data store, Cassandra, in milliseconds.

Event Write Delay graph

On a normal day, Keen events are available to be queried approximately 6 seconds after they are sent. We wanted to provide further transparency into the length of time our users will have to wait between writing and reading at any given time, so we added the Event Write Delay metric to our status page.

This metric matters because until an event has been written to Cassandra, it will not show up in any queries. We display the 95th percentile of these delays, which is a conservative estimate of how long a customer should expect their events to wait before being available for queries.

The 95th percentile typically hovers around 8.5 seconds over a one-day window, while the 50th percentile hovers around 6 seconds, as mentioned earlier. The graph may shift when we make a configuration change or experience a relevant incident that pushes these delays upward, but we don’t expect this to happen very often; we work hard to make sure the event write delay stays consistent!

Who does this impact?

First, our users get more transparency into the platform, which is a win. Second, our support team can point to this graph to help answer questions about why events are not immediately showing up in queries.

Additionally, this can serve as a debugging aid for the Platform and Middleware teams.

How is this measured?

As events are passed to us, they flow through a “bolt” (a piece of code) which writes batches of events to Cassandra. This bolt is where I added code that samples roughly every 2000th event we write. We compare the current time to the keen.created_at property and take the difference, which tells us how long the event waited before it was written to Cassandra. Sampling only 0.05% of written events still gives us about 3 events every second, which I feel is sufficient to produce this metric without incurring any performance cost.
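
The mechanism can be sketched like this (a simplified JavaScript model with made-up numbers, not the actual bolt code):

```javascript
// Sketch of the sampling idea: record the write delay for roughly every
// 2000th event, then report the 95th percentile of the sampled delays.
const SAMPLE_EVERY = 2000;
let seen = 0;
const sampledDelaysMs = [];

function onEventWritten(createdAtMs, nowMs) {
  seen += 1;
  if (seen % SAMPLE_EVERY === 0) {
    sampledDelaysMs.push(nowMs - createdAtMs); // how long the event waited
  }
}

// Nearest-rank percentile over the sampled delays.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

// Simulate 10,000 writes, each waiting about 6 seconds plus jitter.
for (let i = 0; i < 10000; i++) {
  onEventWritten(0, 6000 + (i % 50) * 60);
}
console.log(`sampled ${sampledDelaysMs.length} events, p95 delay: ${percentile(sampledDelaysMs, 95)} ms`);
```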

Special Thanks (because regular thanks wouldn’t suffice)

Shout out to Cory for helping with the visualization aspect in the status page and Datadog. Double shout out to Kevin for helping me understand enough of our back-end to make this happen as well as reviewing the code.

We also recently enabled Webhook Notifications on our status page, which you can subscribe to as seen below. This can be super helpful if you want to be notified via a webhook about an incident on our platform. Our goal is to give users as many tools as possible for their toolkit when using Keen IO. -Taylor

Devin Ekins

Engineer. Tells lame jokes. Only sometimes wears a cape.

How we improved our sales workflow with Slack

We use Slack a lot at Keen IO. We’re constantly using and building Slack integrations to improve our workflow. We’re kind of obsessed. We realized we needed a way to aid our sales and customer success workflows on Slack, so we built a tool that lets people type a command that looks like this:


and pulls up a response like this:

The company info is retrieved from Clearbit’s API. This has been incredibly useful for our Sales and Customer Success teams when they need to look up information about a new signup or an existing customer.

We’ve open sourced all of the code on GitHub. If you want to use this integration for your own company, just follow these steps:

What you’ll need:

Step 1: Grab your Clearbit API key
Step 2: Create a Slack Incoming Webhook (you can reuse an existing one)
Step 3: Copy the webhook URL - you’ll need that later
Step 4: Create a Slack slash command. Preferably /company for the command. The URL should point to your Pushpop instance, on the /slack/company path. Copy the Token - you’ll need that later
Step 5: Create a new job in your Pushpop instance, using the company info source.
Step 6: Add all of the environment variables

  • CLEARBIT_KEY is the Clearbit API key from Step 1
  • SLACK_WEBHOOK_URL is the webhook URL from Step 3
  • SLACK_TOKEN_COMPANY is the slash command token from Step 4

Step 7: Restart Pushpop (make sure you’re running pushpop as a webserver)
Step 8: Type /company into slack!
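
The heart of the integration is small: take a company record from Clearbit and reshape it into a Slack webhook payload. Here is a toy sketch with hypothetical field names (not the actual Pushpop source, and only a sliver of the real Clearbit schema):

```javascript
// Turn a company lookup result into a Slack incoming-webhook payload.
// Field names here are illustrative assumptions, not the full schema.
function companyToSlackPayload(company) {
  return {
    text: `*${company.name}* (${company.domain})`,
    attachments: [{
      fields: [
        { title: "Location", value: company.location, short: true },
        { title: "Employees", value: String(company.employees), short: true },
      ],
    }],
  };
}

const payload = companyToSlackPayload({
  name: "Keen IO",
  domain: "keen.io",
  location: "San Francisco, CA",
  employees: 50,
});
console.log(JSON.stringify(payload, null, 2));
// The real job would POST this JSON to the webhook URL from Step 3.
```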

Person Info

We can also look up information on individual people.

This creates a slash command that will retrieve info about a person (via email address) from Clearbit, and send it back in to Slack.

The person info will look like this in Slack:

How to set up the /person command:

Step 1: Grab your Clearbit API key
Step 2: Create a Slack Incoming Webhook (you can reuse an existing one)
Step 3: Copy the webhook URL - you’ll need that later
Step 4: Create a Slack slash command. Preferably /person for the command
Step 5: The URL should point to your Pushpop instance, on the /slack/person path. Copy the Token - you’ll need that later
Step 6: Create a new job in your Pushpop instance, using the person info source.
Step 7: Add all of the environment variables

  • CLEARBIT_KEY is the Clearbit API key from Step 1
  • SLACK_WEBHOOK_URL is the webhook URL from Step 3
  • SLACK_TOKEN_COMPANY is the slash command token from Step 5

Step 8: Restart Pushpop (make sure you’re running pushpop as a webserver)
Step 9: Type /person into slack!

That’s it! Check it out on GitHub to learn more. If you have any questions or ideas of your own, drop by our community Slack channel.


Joe Wegner

Open source something

So You’ve Decided to Build Analytics In-House

So you’ve decided to take the plunge and build an in-house analytics system for your company. Maybe you’ve outgrown Google Analytics and Mixpanel, or maybe you’re an early-stage business with unique analytics needs that can’t be solved by existing software. Whatever your reasons, you’ve probably started to write up some requirements, fired up an IDE, and are ready to start cranking out some code.

At Keen we began this process several years ago and we’ve been iterating on it ever since, having successes and stumbles along the way. We wanted to share some of the lessons we learned to help you through the build process.

Today we’ll give an overview of key areas to consider when building an in-house analytics system. We’ll follow up with detailed posts on these areas in the weeks to come.


Inputs

Before you build your in-house analytics system, you need to consider what inputs will be coming into it, both expected and unexpected. Assuming you already know what kinds of data you want to track and what your data model will look like, here are a few things to think about:

  • Scalability

  • Traffic variability

  • DDOS

  • Rate limiting and traffic management

  • Good old-fashioned input validation

Each of these concerns needs to be addressed properly to make sure that your users get a solid experience. Most of them go quite a bit beyond checking inputs to a function.

We’ve all heard about defensive programming, validating inputs, and script injection. When you build a public-facing analytics system there are a variety of different types of malicious inputs, not all of which manifest themselves as readily as others. Defending against a DDOS event requires architectural decisions around what is an acceptable load profile. Managing rate limiting is heavily informed by what sort of a business or service you want to run, and is also impacted by the level of service you want to give certain users.

Some questions to ask: Are all users equal? Do certain users somehow need to be treated differently from others? Considering these questions in advance will help you build the right system for your users’ needs.
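
One common way to implement the rate-limiting piece is a token bucket; here is a minimal sketch (an illustration of the technique, not Keen’s actual implementation):

```javascript
// Minimal token bucket: each user gets `capacity` tokens that refill at
// `refillPerSec`; a request is allowed only if a token is available.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity;
    this.lastRefillMs = 0;
  }

  allow(nowMs) {
    // Top the bucket back up based on elapsed time, capped at capacity.
    const elapsedSec = (nowMs - this.lastRefillMs) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // over the limit: reject or queue this request
  }
}

// "Are all users equal?" -- a premium user might simply get a bigger bucket.
const bucket = new TokenBucket(2, 1); // burst of 2, refills 1 token/sec
console.log(bucket.allow(0), bucket.allow(0), bucket.allow(0));
```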


Storage

Today, almost all web applications require developers to select at least one storage solution, and this is an especially important consideration for an in-house analytics system. Some key questions to consider are:

  • What sort of scale are you looking to support?

  • What is the relationship between reads/writes?

  • Are you trying to build a forever solution or something for right now?

  • How well do you know the technology?

  • How supportive is the community?

The better prepared you are to answer these questions, the more successful your solution will be.

At Keen we use Cassandra as our primary data store and have a few other storage solutions for rate limiting, application data, etc… We chose Cassandra as our primary store because of its performance and availability characteristics. Another decision point was how well it scales with writes when the data volume gets very large. We will discuss this in more depth in a future post.

Tech Selection

There are more technologies available to developers today than ever before. How do you know which ones will work best for your analytics needs? What OS do you use? What caching technologies?

At Keen we have gone through this process numerous times as we built and scaled our analytics platform. One recent example was selecting the language for two of the systems in our middleware layer: caching and query routing. These are fairly well-studied problems that don’t require bleeding-edge technologies to solve well.

Here are the criteria we used to make our selection:

  • We needed a mature toolchain that would allow us to predictably troubleshoot and deploy our software

  • We needed a language that was statically typed and concise

  • We did not need everyone to have prior knowledge of the language (since we didn’t have an existing codebase to build on top of)

With these factors in mind, we ended up eyeing the Java Virtual Machine (JVM). The toolset is mature, performance is adequate, it is very predictable, and it has a large set of frameworks that solve common problems. However, we didn’t want to develop in Java itself, as it tends to be overly verbose for our needs.

In the end we decided to use Scala. It runs on the JVM so we get all of the benefits of the mature toolchain, but we are able to avoid the extra verbosity of the Java language itself. We were able to build a few services with Scala with quick results and have been very happy with both the language and the tooling around it.

Querying + Visualization

Once you’ve figured out where your data will live, you will need to decide how to give your teams access to it. What will reporting look like? Will you build a query interface teams can use to run their own analysis? Will you create custom dashboards for individual teams: product, marketing, and sales?

At Keen, we built query capabilities into an API, powered by Storm. The query capabilities allow users to run counts, sums, max, mins, and funnels on top of the data stored in Cassandra. We also built a JavaScript library so users can visualize their queries in charts and dashboards. We wanted to make it super simple to run queries, so we created a Data Explorer - a point-and-click query interface built using React and Flux. It hooks in with our JavaScript visualization library to generate a wide variety of charts and graphs.


Troubleshooting

Ok, so now your service is up and running, you are providing value to your teams, and business is up and to the right. Unfortunately you have a team member who isn’t particularly happy with query performance. “Why are my queries slow?” they ask.

You now have to dig in to understand why it is taking so long to serve a query. This feels odd because you specifically chose technologies that scale well and performance a month ago was blazingly fast.

Where do you start? In most analytics solutions there are a number of systems involved with serving the request. There is usually an inbound write queue, some query dispatching mechanism, an HTTP API layer, various tiers for request processing, storage layers, etc… It is critical to be able to trace a request end to end as well as monitor the aggregate performance of each component of the system and understand total response times.

At Keen we have invested in all of these areas to ensure we have real-time visibility into performance of the service. Here’s an overview of our process:

  • Monitor each physical server and each component

  • Monitor end to end performance

  • Build internal systems that trace requests throughout our stack

  • Build auto-detection for performance issues that notify a human Keen engineer to investigate further

This investigation process leverages our JVM tools, along with various custom tools and testing environments that help us quickly pinpoint and fix the problem when the system is underperforming.
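
The auto-detection step can be as simple as watching a rolling window of response times and notifying a human when a percentile crosses a limit. A sketch with hypothetical thresholds and a stand-in notify hook (not our actual monitoring code):

```javascript
// Nearest-rank 95th percentile of a window of response-time samples.
function p95(samplesMs) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
}

// Check the window against a limit; call `notify` (e.g. page an engineer)
// when the tail latency is out of bounds.
function checkPerformance(samplesMs, limitMs, notify) {
  const value = p95(samplesMs);
  if (value > limitMs) {
    notify(`p95 response time ${value} ms exceeds ${limitMs} ms -- investigate`);
    return false;
  }
  return true;
}

const samplesWindow = [120, 130, 110, 125, 4000]; // one slow outlier drags p95 up
checkPerformance(samplesWindow, 500, msg => console.log(msg));
```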

Murphy’s Law

Yep. This is actually a thing: “If something can go wrong, it will.” Inevitably pieces of your analytics solution will have issues, if not the whole system itself. I touched on this in the troubleshooting section, but there are much larger issues you will need to think through, such as:

  • How are you laying out your servers in the network?

  • How do you deal with data corruption or data loss?

  • What is your backup and recovery timeline and strategy?

  • What happens when a critical team member moves on to another role or company?

Imagine these scenarios. Maybe you were using FoundationDB, only to have it scooped up by Apple, and now you are trying to figure out how this impacts you. Maybe someone was expanding storage and took down all your load balancers because your machines weren’t labeled correctly. Maybe your sticks of memory went bad. Maybe Level3 just went down and took your whole service offline.

These represent just a few of the issues you will likely run into as you run your own service. How well you deal with them will help define how well you can serve your customers.

Stay tuned for more details

Over the next few months we will release in-depth posts covering each of the areas above to help you build a successful in-house analytics system. We look forward to sharing our thoughts and lessons we learned building out our service.

Want an alternative to build-it-yourself analytics?

We went through all the work of building an analytics infrastructure so you don’t have to. Our APIs for collecting, querying, and visualizing data let you get in-house analytics up and running fast, to give your team and your customers the data they need.

Sign up for free to see how it works, or reach out to us with any questions.

Brad Henrickson

Builder of things.

DataViz Show and Tell

Thank you to everyone who listened, shared, and asked questions at our first Data Visualization Show and Tell. We learned a lot and had tons of fun. We hope you did too.

A big thank you to our speakers:

Keen.Dataset, Dustin Larimer

Drafting and Experimenting with Data to create a Final Visualization, Will Johnson

Your Data Doesn’t Mean What You Think it Does, Zan Armstrong

Github Language Visualizations, Rebecca Brugman

In-product viral growth patterns seen at DocuSign, Chip Christensen

Discovering hidden relationships through interactive network visualizations, Kaustuv De Biswas, Mappr

To stay up to date on data visualization projects and events, subscribe to our monthly dataviz digest :) If you have something you’d like to see featured in our next digest, shoot us an email!

Till next time!

Ricky Vidrio

empath adventurer

Announcing New Docs for Keen IO

We’re excited to announce the release of our new Keen API Documentation. We’ve updated both the content and design of our documentation to make it even easier for you to use the Keen API for collecting, querying, and visualizing your data.

Our new documentation includes:

API Reference: Look up all API resources here in our three-pane API Reference, complete with code samples in cURL, JavaScript, Ruby, Python, PHP, Java, and .NET.

Data Collection, Data Analysis, and Data Visualization: These newly designed overview pages give a snapshot of each of these areas, with quick links to take you to the right resources.    

Guides: Find how-to guides, recipes, and deep-dives into areas such as building funnels, conversion analysis, and user behavior metrics. We’ll be adding lots more guides here to help you make the most of using Keen and to get the maximum value from your data. Stay tuned!    

Quick-Start Guide: If you’re not already a user of Keen, you can get started here. You can also select an SDK from the options outlined on our SDK page, sorted by collection, analysis, and visualization.    

Integrations: Our many integrations with partners such as Stripe, SendGrid, and Runscope are featured here, with step-by-step instructions.

We can’t wait for you to get started using our new documentation and we’d love to get your feedback! Please send your comments to or chat with us on Slack!

Nahid Samsami

Product at Keen. Cat Advocate. Also known as Hidi.

Introducing Analytics Tracking for Arduino

We’ve heard the clamoring and finally we’re proud to announce that we have an Arduino library for sending events to Keen IO! If you want to check out the code, it’s all open sourced here.

To get the creative ideas flowing, I started a sample project using this library to create a dashboard that tracks motion detection from a PIR sensor. The full code for the dashboard and Arduino snippet live here.

Activity Tracker

What are we building?

Have you ever wondered how active you are throughout the day, or if your cats are running around all over your desk at night? I have! What we will build here is a motion sensor hooked up to an Arduino Yún board that sends event data to Keen so we can display it on a nice dashboard.

Components Used

Setting up the Arduino Example

So, Keen IO requires SSL to work, and currently, the Yún is the only Arduino board that supports it. And, to make things even more fun, you have to do a little work with security certs to make it work. There’s a nice write-up on how to do that here.

Once the Yún is configured with the new certificate, it’s time to run the example code to make sure you can send events to Keen IO. One small caveat to the built-in example: since I am programming the board over wifi, I had to use Console instead of Serial to see debug output.

#include <Bridge.h>
#include <ApiClient.h>
#include <KeenClient.h>
#include <Console.h>

KeenClient keen;

void setup() {
  pinMode(13, OUTPUT);
  digitalWrite(13, LOW);
  Bridge.begin();   // start the Yún bridge (required before any network calls)
  digitalWrite(13, HIGH);

  Console.begin();  // Console, not Serial, since we program over wifi
  while (!Console); // wait for a Console connection
}

void loop() {
  keen.addEvent("motion_detections", "{\"cat\": 1}");

  while (keen.available()) {
    char c = keen.read(); // echo the API response for debugging
    Console.print(c);
  }

  delay(5000); // wait a bit before sending the next test event
}

This code will boot up on the Yún and send an event to the motion_detections collection associated with your Keen account. If you’re programming it through the USB cable, use the Serial object instead to see debug output.

Tracking Motion

Before we write more code, we have to hook up the PIR sensor to the Arduino.

Now, what we really want is to track motion, but there are a few things we have to figure out to do that. First, we have to be able to parse the date and time, which isn’t very straightforward. I found this helpful example, which I then modified to parse out the pieces of a date and time that I would need for my data model.

//if there's a result from the date process, parse it:
while (date.available() > 0) {
  // get the result of the date process (should be day:month:year:day of week:hour:minute:second):
  String timeString = date.readString();

  // find the colons:
  int dayColon = timeString.indexOf(":");
  int monthColon = timeString.indexOf(":", dayColon + 1);
  int yearColon = timeString.indexOf(":", monthColon + 1);
  int dayOfWeekColon = timeString.indexOf(":", yearColon + 1);
  int hourColon = timeString.indexOf(":", dayOfWeekColon + 1);
  int minuteColon = timeString.indexOf(":", hourColon + 1);
  int secondColon = timeString.indexOf(":", minuteColon + 1);
  int nanoColon = timeString.lastIndexOf(":");

  // get the substrings for hour, minute second:
  String dayString = timeString.substring(0, dayColon); 
  String monthString = timeString.substring(dayColon+1, monthColon);
  String dayOfWeekString = timeString.substring(yearColon+1, dayOfWeekColon);
  String hourString = timeString.substring(dayOfWeekColon+1, hourColon);
  String minuteString = timeString.substring(hourColon+1, minuteColon);
  String secondString = timeString.substring(minuteColon+1, nanoColon);
  String nanoString = timeString.substring(nanoColon+1);

  // convert to ints, saving the previous second:
  // int year, month, month_day, day_of_week, hours, minutes, seconds;
  month_day = dayString.toInt();
  month = monthString.toInt();
  day_of_week = dayOfWeekString.toInt();
  hours = hourString.toInt();
  minutes = minuteString.toInt();
  lastSecond = seconds;
  seconds = secondString.toInt();
  nano = nanoString.toInt();

  // Need to make sure we don't send an erroneous first motion event.
  if (lastHour == -1) {
    lastHour = hours;
  }
}

There’s a lot of nasty boilerplate code in that snippet, but this lets us track the different numbers we need to look at things like active seconds per day, hour, month, etc.

Next, we want to add some logic to the main loop to detect when the PIR sensor picks up motion:

void loop() {
  pirVal = digitalRead(pirPin); // pirPin: whichever input pin the PIR sensor is wired to

  if (pirVal == HIGH) {
    if (pirState == LOW) {
      digitalWrite(13, HIGH); // LED ON to show we see motion.
      Console.println("Motion detected!");
      pirState = HIGH;
      lastActivity = nano;

      keen.addEvent("motion_detections", "{\"motion_state\": \"start\"}");

      while (keen.available()) {
        char c = keen.read();
        Console.print(c);
      }
    }
  } else {
    if (pirState == HIGH) {
      Console.println("Motion stopped!");
      pirState = LOW;
      digitalWrite(13, LOW);

      keen.addEvent("motion_detections", "{\"motion_state\": \"stop\"}");

      while (keen.available()) {
        char c = keen.read();
        Console.print(c);
      }
    }
  }

  delay(1000); // poll every second
}

Setting up the Dashboard

I wanted to set up a quick dashboard to track motion, so I took our hero-thirds dashboard starter and loaded it into an Ember.js project (I wanted to learn Ember as well). You can see a live demo here.

I played around in the Data Explorer until I found the visualizations I wanted, then added them to the page. The final version of the Arduino code is also available to view.

So with a few simple lines of code and a quick dashboard, you can start tracking some interesting data with your Arduinos on Keen IO!

Have ideas for a project or want to hack around on your Arduinos? Come chat with me in our Slack channel!

Alex Kleissner

Software engineer by day, Partybot by night

How to write a round-closing pitch deck

Last July, we hit a huge milestone when we closed $11.3 million in Series A funding led by Sequoia. This raise was preceded by a few seed rounds. Of course, this was amazing for Keen, but the road to closing these rounds wasn’t easy. It took several tries to get to the pitch deck that worked.

Our co-founder Dan Kador recently gave a talk at 500 Startups about the process Keen went through to get to the winning deck (for one of our seed rounds). We hope it will be helpful for anyone going through fundraising mode now, or looking ahead to a future fundraise!

The slides

Do you have any other tips for writing pitch decks? We’d love to hear them on twitter or in the comments below.

Alexa Meyer

Brand and behavioral design. Learner + Activator. Cheese consumer.

How I Visualize My Time Spent Programming

This is a guest post by Alan Hamlett, CTO of WakaTime.

As a developer and CTO, I’m always looking for new ways to visualize my effectiveness as a programmer. I want to track things like “What was my most productive day of the week?” “What programming languages have I used the most?” “What files do I spend the most time in?”

I use the WakaTime dashboard to see my projects, files, and branch-level visualizations over various time ranges, but it doesn’t allow me to create custom queries of my data. I could use the WakaTime API, but I found an easier way…

Introducing wakadump!

I created wakadump to easily export my WakaTime data to Keen IO and take advantage of their powerful data explorer features. For example, I queried all my WakaTime data from the last year and was quickly able to create these custom visualizations:

Most productive day of the week

Top files worked in over the last year

Editor usage over the last year

Programming languages used over the last year

Create your own visualizations

To create your own custom queries using the Keen IO data explorer, just follow these steps:

  1. Install wakadump with:

    sudo pip install wakadump 
  2. Export your logged time from your settings page.

  3. Sign up for a free Keen IO account and grab your Project ID and Write Key from your project page.

  4. Upload your WakaTime data to Keen IO by running this command:

    wakadump --input wakatime-user.json --output keen.io
    Project ID: XXXX
    Write Key: XXXX
    Preparing events...
    Uploading events to keen.io...
  5. Create custom queries using Keen IO’s Data Explorer

If you find something interesting from your WakaTime data, please do share your insight in the comments or send me an email!

Alan Hamlett

CTO, WakaTime

An Introvert Confesses

I recently went on a couple dates with a super friendly, happy, sociable guy I’d known for a few years. On our second date, he said he had a confession to make: he was actually an introvert.

The confession surprised me, partly because I had a different image in my head of what an introvert was like, but mostly because it made me sad that he thought of it as a confession.

I consider myself an extrovert, drawing energy from the people around me, and I hadn’t really thought about the barriers our society creates, both personally and professionally, for people who just want a little more space to reflect inward.

Why did he feel like he had to apologize?

Based on a recommendation from an introverted co-worker, I looked up Susan Cain, a self-described introvert. In both her book, Quiet: The Power of Introverts in a World That Can’t Stop Talking, and her TED Talk, she describes how American society became increasingly extroverted in the 20th century due to the rapid growth of cities, where people needed to constantly network and prove their worth to strangers.

This trend has only intensified in the 21st century.

Our modern workspaces favor open seating to foster better communication. Promotions are often received based not just on the quality of work, but also on how well you can present to large audiences. In Lean In, Sheryl Sandberg specifically advises women to raise their hands and make themselves heard in order to become leaders.

Meetings fill up each day of the work week. Brainstorming is frequently a group exercise. And companies often require employees to be in the office during core hours to demonstrate productivity. In 2013, Yahoo CEO Marissa Mayer created a policy banning telecommuting.

Those constant extroverted activities can drain the energy out of an introvert, and it’s debatable whether they improve productivity for everyone. Thinking and working alone can help people enhance skills, discover new ideas, and create something from nothing.

As an extrovert myself, I know that working alone can sometimes be draining, but I also know that it’s the quiet moments when I’m left alone that I can be most productive (writing this piece, learning to code, choreographing dances).

The world needs introverts too.

Is your work environment good for both extroverts and introverts?

At Keen, a lot of people work remotely, so we try to make it easy to work from home (using Slack to communicate, Google Hangouts for video calls, and GitHub for code and pull requests).

Right now, we can’t control our physical environment completely because our office is in a co-working space, but we will soon be moving into a new space that will have different rooms with varying degrees of isolation and noise levels.

Our organizational structure is flat, which means that being comfortable presenting to large audiences doesn’t enable you to move up; it just puts you in a role that’s best suited for your comfort level.

We also have regular book clubs and writers workshops to bridge the divide between quiet reflection and outward communication, which can both be important parts of ourselves.

What can extroverts learn from introverts?

Susan Cain talks a lot about how an introvert often has to act like an extrovert in order to interact within our society. Perhaps extroverts should also practice acting like introverts.

Keen encourages everyone to work remotely for at least one solid week (#remotematters challenge), but I haven’t done that yet. It may be something to try, not only to better understand what it’s like for remote folks, but also to isolate myself from the buzz in the office.

I once saw a TED Talk by Jason Fried on productivity that encouraged people to have at least one day a week in which talking was not allowed. That would also enable introverts to relax, knowing there wouldn’t be any forced group interaction that day, and enable extroverts to turn inwards on projects best done in solitude.

Interruptions can stress out introverts and decrease productivity.

Perhaps I should bring this to my personal life, too. I could disconnect from the internet each night for a week. I could periodically practice Shavasana or some other form of meditation. And probably most importantly, I could make sure to listen to the quiet introverted people in the room.

My extrovert resolution

I want to be more inclusive of introverts because I value what different perspectives can bring and genuinely believe that you need both introverts and extroverts to be able to create and communicate new ideas and projects.

But also, I really like that guy. I don’t want him or others like him to feel that they are confessing a secret. They shouldn’t feel the need to apologize for being introverts just because our culture is one that can’t stop talking.

Maria Dumanis

Good news everyone!