10 writing hacks to overcome perfectionism and just start spitting it out


Last week, I hosted a writers workshop for the whole company. It was something I’d been talking about with Kyle (one of the Keen founders) for a while, but it had been hard to get started. I felt performance anxiety about making it amazing and showering everyone with genius words of wisdom about how to be totally awesome as a writer. That kind of pressure is tough, but it’s so hard to get around when you’re in charge of something and supposedly an expert.

But then Kyle said something that made it all feel okay: “Don’t worry about it. Just make it a prototype.”

A prototype, I thought. That’s genius. So it doesn’t even count! It’s just for practice. Suddenly I was excited instead of scared. And I realized I had just found inspiration for the theme of the workshop: Perfection Is Poison.

I latched onto this theme because I’d noticed I wasn’t the only one in the company who got tripped up by perfectionism, especially when it came time to write something. Impostor Syndrome, Writer’s Block, Generalized Creative Anxiety: they’re all close cousins on the neurotic freakout spectrum (NFS).

The question for the writers workshop was: how best to overcome them? Could inspirational ideals be helpful? I guess, maybe. But what about cheap tricks and gimmicks? Could they help, too? I was pretty sure the answer was yes.

So here they are: the top ten writing hacks to overcome perfectionism and just start spitting it out.

1) Call it a prototype. Whatever you’re writing, decide from the start that it’s just for practice. It doesn’t count. No one even has to see it. You have no plans for it. It has no title. It barely exists. So just dive in and type around and see what happens.

2) Pretend it’s an email. You know how it’s so much easier to write in your own voice when you’re just sending an email? Okay, open your email, pick a friend to put in the To: box, start Dear so-and-so, I’m writing up this thing and I just want to make it sound natural, so you’re my secret guinea pig/muse/recipient/conspirator. And you don’t even know it! Then keep going.

3) Have a conversation instead. Kind of like emails, conversations have that way of being so damn conversational. They just can’t help it. Jot down some notes about your idea and get someone to listen and ask you questions. Have them write down the phrases they like best. Or just record it.

4) Break it down. 1,000 words too much to ask? Try just answering a few questions, ideally in pen. (Hand-written always feels lower stakes, especially if your handwriting is horrible like mine.) Who are you talking to? What is the one thing you want them to know? What are three examples/variations/anecdotes about that thing? Get that down and now you have something to aim for. (You don’t have to call it an outline. That’s too formal and confining. Feels too much like middle school.)

5) Share it too soon! Yes, before it’s ready, when it’s still obviously rough, before you’ve had a chance to edit out the best phrase because you decide it’s stupid, and before you’ve worked on it long enough to have any reasonable expectation of “quality,” let alone “perfection.” Chances are your reader will find something great in there to organize around. Worst case, you’ll be able to say, “I only spent an hour on this. What do you expect?”

6) Get a buddy. I know, it’s not the gym and we’re not doing squats or jerks (or whatever athletes do at the gym. I’m a writer; I have no idea). Point is, it’s easier to share something too soon if you’ve got someone else doing the same thing. Mutual vulnerability and accountability.

7) Do a reality check. What’s the thing you’re most afraid of? Name it. Ask someone if it’s rational. For me with my writing, my biggest fear is that I’ll write something lame and everyone who previously admired my writing will suddenly say to themselves, “Oh my god, we were so wrong! You have no talent at all! We were giving you credit you don’t deserve! In fact, you are completely worthless! Not just as a writer. As a human being!” At our workshop, I asked my colleagues if that is how they would respond in the event that I wrote something that wasn’t my best work. They laughed and said, “Of course not!” (but half of them acknowledged they had the exact same fear).

Note: I’m not saying I won’t write something lame. In fact, this might be that very thing. But it’s not reasonable to think that your whole reputation as a writer and human being is on the line every time you write a new sentence.

8) Specify what feedback you want. You know what’s dangerous? Showing writing to someone and saying, “Well, what do you think?” In the absence of any guidance, that reviewer may have a field day on a worm can you didn’t even know was in the pantry. (Triple-mixed-metaphor: 10 points.) If you’re mostly curious how they react to the topic, ask for feedback on that. If you’re definitely sold on the topic but curious about the flow, ask how they think it flows. If you plan to publish this afternoon and want a set of eyes to look for typos or glaring errors, ask for that. Help the reviewer be helpful.

Not sure what feedback to ask for? Try these two questions: 1) What do you like about the piece? 2) What is unclear or unnecessary? (Feel uncomfortable asking those questions? Say you got them from a blog post.)

9) Ignore all this advice. If you’ve got a system that works, fine, do that instead. But then why did you click on a post with this title?

10) Don’t have 10 things in a list. It’s very tempting to go with ten, but it feels inauthentic and forced. Avoid it.

I hope some of this was helpful to you. Full disclosure, this is my fourth attempt to write something to share the most valuable takeaways from our workshop (second attempt written entirely inside a Yahoo mail window). I’m going to send it to the friend I put in the To: column now because it’s still too soon to share it. I’d have to be crazy to send it now.

If you have any sneaky tricks to share, please add them in the comments (assuming I go ahead and post this later). And if you feel like attacking me as a human being, go right ahead. I can take it.

Now please fire up your email and start working on a prototype. It doesn’t count and no one will even know.


Kevin Wofsy

Teacher, traveler, storyteller

The Social Authoring Experiment


We are excited to be supporting Keen IO community members in AirPair’s social authoring experiment!

What exactly does this mean? Well, have you ever tracked changes in MS Word and thought “there has to be a better way?!” Us too! AirPair has released some pretty cool features that allow authors and readers to collaborate on posts on GitHub, just like normal code, via forks and pull requests. Over the next 12 weeks you can submit posts and collaborate with the community on the best tutorials, opinion pieces, and tales of using Keen IO, Firebase, RethinkDB, Twilio, and others in production.

One of our community members, Mark Shust, already submitted an awesome post about Making a Keen IO Dashboard Real-time by Integrating it with Firebase & D3.js. You can check it out, leave a review, and make edits!

Submit your posts here.

Have questions or just want to toss an idea by us? Feel free to reach out to us.

A note about rate limits

Our #1 job here at Keen IO is to maintain the reliability and stability of our API. As you might expect, we’ve put a number of limits and protections in place to safeguard the service and make sure existing customers are protected.

Today, we had a couple of questions come up about the forms of limiting we’ve employed over the last couple of weeks. In all the hubbub we may have lacked a clear explanation of each, so I’ll give one now. We also updated our documentation related to each of these today. If anything below is unclear, shoot us a question!

Rate Limits

Keen’s had these for a long time. They limit the number of queries a customer can perform within a 60-second window, and they’re enforced at the project level. The limits are pretty high, at about 1,000 queries per minute.

You can see these limits in our docs here.

Note: Rate limiting is soon going to be improved significantly. This may change our limits. We’ll talk about that more when the time comes.
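One way to stay comfortably under a per-minute cap like this is a client-side sliding-window limiter. The sketch below is purely illustrative and not part of any Keen SDK; the 1,000-per-60-seconds figure comes from the limits above.

```python
import time
from collections import deque

class WindowRateLimiter:
    """Client-side guard that keeps query volume under a per-window cap.

    Hypothetical helper for illustration only -- not a Keen SDK class.
    """

    def __init__(self, max_calls=1000, window_seconds=60):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of calls still inside the window

    def acquire(self):
        """Return True if a query may be sent now, False if sending it
        would exceed the per-window limit."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

A caller would check `acquire()` before issuing each query and delay or drop queries when it returns False.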

Concurrency Limits

This is a new limit. It’s enforced per organization and has two parts:

  • Only ~24 queries are allowed to be executing simultaneously in each data center
  • Only ~24 queries are allowed to be pending execution in each data center

These limits are documented here.

Concurrency limits are both new and pretty severe right now. We’d love to raise them in the future when we can handle it.
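On the client side, a semaphore is a simple way to avoid tripping a concurrency cap like this. The snippet below is a hypothetical sketch (not a Keen SDK feature); the ~24 figure comes from the limits above.

```python
import threading

# Hypothetical client-side cap to stay under the ~24 queries allowed
# to execute simultaneously per data center. Illustrative only.
MAX_CONCURRENT_QUERIES = 24
_query_slots = threading.BoundedSemaphore(MAX_CONCURRENT_QUERIES)

def run_query(execute, *args, **kwargs):
    """Run `execute` (any callable that performs one query) while
    holding one of the limited concurrency slots. Threads beyond the
    cap block here until a slot frees up."""
    with _query_slots:
        return execute(*args, **kwargs)
```

Wrapping every query call in `run_query` means at most 24 are ever in flight from this process, so the server-side limit is never the thing that rejects you.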

Fast Failure

This isn’t a rate limit per se. It’s a combined guarantee and protection feature: if we haven’t been able to begin executing your query within 10 seconds, we send it back to you with a failure. This feature is getting an improvement tomorrow to make it behave exactly as described; today it allows some queries to hang around a bit longer.

Fast failure is documented here.
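If your queries start coming back as fast failures, retrying with backoff is a reasonable response. This is a hedged sketch, not part of any Keen SDK; which exception your client raises on a fast failure is client-specific, so a generic `Exception` is caught here.

```python
import random
import time

def query_with_retry(execute, max_attempts=4, base_delay=1.0):
    """Retry a query that was fast-failed because it couldn't start
    executing within 10 seconds. `execute` is any callable that
    performs one query and raises on failure."""
    for attempt in range(max_attempts):
        try:
            return execute()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            # Exponential backoff with jitter so a fleet of clients
            # doesn't re-stampede an already saturated queue.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

The backoff matters: retrying immediately against a queue that is already full just generates more fast failures.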

Why Do These Limits Exist Now?

For a number of reasons around performance and system health, these are the limits we’ve put in place to keep averages reasonable given our ability to execute queries. We’d like them to be higher, but they are what they are until we can improve performance within the stack.

Long story short: as new customers stretch the platform in new ways, we want to make sure existing customers are protected. We’ve grown really fast and we’re having trouble keeping up. We’ve been working super hard over the last week to get things to a stable place. Now we can start aiming for making it faster. :)

If you have any questions about rate limits, please reach out to us!

Cory Watson

Bigger than a breadbox.

Platform Stability AMA Recap

Peter, Brad, and I hosted an AMA about recent platform instability on February 27th. It was a chance for our community to ask about what’s been up with the API lately and how we’re addressing it. Here are a few highlights from the conversation, fit and trimmed for readability. Editing notes are in brackets. The AMA itself contains the full, unedited versions.


Nima: We’re also an analytics company dealing with billions of events each month, and have 3 layers of redundancy that last up to 60 days in our system – I was surprised to see that you only keep your queues for a week.

Josh: Most data is off of Kafka and persisted to Cassandra in multiple replicas, across multiple data centers, within 5 seconds of receipt. When the write path gets backed up, that delay can turn into hours, but a week isn’t something we anticipated because we hadn’t considered a failover scenario where data in the previous DC still needed to be ingested a week later. We haven’t decided exactly what we want to do about this. Storing Kafka logs longer is one option, but that’s a difference in degree, not kind. My hunch right now is that we’ll build warnings into the monitoring stack for this, a la “Hey dummy, this data is going to be reclaimed and isn’t permanently persisted yet!”


Nima: We’re having discussions on building even more of our analytics on top of Keen (client events -> engineering dash-boarding) – but it’s hard to make that decision due to the recent events; especially when no real plan has been shared with the customers around what you’ll do to address these issues.

Josh: Here are some of the plans we have in store:

  • Index optimization, which will improve query performance and reduce Cassandra I/O generally. Expect to see a document about this on the dev group next week. We’re trying to share more of this stuff, earlier, for transparency but also feedback (we know many of y’all are pretty smart about this stuff)
  • Cassandra and Astyanax / Cassandra driver work. There are a few optimizations we’ve identified that will reduce the % of Cassandra reads & writes that have to be coordinated to nodes that don’t own the row key in question. Big drop in I/O and increase in performance to be gained.
  • Hardware. Kafka, Zookeeper, Storm, and Cassandra are continuously getting new metal.

Nima: So I guess the question is: what are the plans? How will you scale up faster than your customers are scaling?

Josh: The question of how we scale up faster than customers is a favorite of mine. Fundamentally I think it comes down to people. Another kind of “monitoring” we have in place is this: “Is Keen a great place to work for distributed systems engineers?” If the answer is “Yes!” then we have the opportunity to address performance and scale challenges with the brightest minds out there. I think that gives us as good a shot as any to keep up with your (very impressive!) growth.


Jody: I’ve heard that to write good software, you have to write it twice. What are some of the major things you’d do differently if you had to rebuild Keen from the ground up?

Josh: I have heard that too, and I’d say at LEAST twice. Here are a few things I would have done differently [geared specifically toward our distributed Java stack]:

  • Not use Storm’s DRPC component to handle queries. It wasn’t as battle-tested as I’d hoped and I missed that. We spent a lot of time chasing down issues with it, and it’s partially responsible for queries that seem to run forever but never return anything. Thankfully, due to the work of Cory et al., we have a new service, Pine, that replaces DRPC, and all queries are running on it now. It has queueing and more intelligent rate limiting and customer isolation.
  • Greater isolation of write and read code & operational paths. The semantics of failure are very different in writes vs. reads (you can re-try failed reads, but not un-captured writes). To be honest it took us a few bruises in 2013 & 2014 to really drill that into our design philosophy. Most of our stack is pretty well separated today, but we could have saved some headaches and customer impact by doing it earlier.
  • More monitoring, more visibility, earlier. Anything to ease the operational burden across our team during incidents and prevent stress and burnout.

Steve: I would be interested in your dev plans over the next 3-6 months especially on the query API.

Peter: Over the next 3-6 months our dev plans revolve around performance, consistency, and scalability. We intend that these architectural changes set us up for future feature expansion. Doors we’d like to open include custom secondary indices, greater filter complexity, chained queries, and increased ability to do data transformation. I’m happy to share more of my excitement around what the future holds if you’re curious :)


Rob: We’ve been using Keen for a while now and I recommend you guys to everyone I know. It looks like you’re having some success, which is awesome, but with that, issues scaling. How do you intend to improve performance even while you scale?

Brad: Thanks so much for your support, Rob. There are a few things we’re doing to be able to improve performance as we scale. We’re making our systems more efficient now, and, further down the road, looking at new capabilities in how we service queries and writes as they come into the system. We’re currently on the second major iteration of our architecture, which is the result of looking at how we can better serve our customers.


Nariman: Any updates on the read inconsistency issue? We first reported this in January and continue to see it in production as of yesterday. The problem hasn’t changed since it was first reported, it includes null timestamp and other null properties.

Brad: We have a number of areas that require repair. So far I’ve made it through about a quarter of the data, as we ran into a few operational issues around doing the repairs. That said, we are now running full steam ahead on the repairs, monitoring them closely, and making sure we don’t run them so fast that they cause other issues in our infrastructure. This is my top priority and I have all the support I need to get it done.


Marios: You mention that you use Cassandra, are there scenarios where Cassandra’s conflict resolution may result in customer data loss/corruption?

Peter: In the past we ran into issues with Cassandra’s concept of Last Write Wins (“doomstones” and clock drift). Currently, the majority of our consistency issues are due to unforeseen edge cases in node failure scenarios.


Lee: I have been thinking about querying Keen overnight for consistency and replaying (when possible) if this check fails. Do you have a recommended procedure for replaying events over a questionable period? Thoughts on this?

Josh: [the first part of the answer goes into how; this is the 2nd part] Would I recommend it? It depends. If individual events have a very high impact (e.g. financial / you’re using them to bill customers), then, if it were me, I’d keep my own rolling record of events over some period of time. The Internet and natural disasters and the entropy of an expanding universe can never be ruled out. Most compliance directives & regulations mandate this anyway.

That said, one of our design goals for Keen is that in the general case you shouldn’t have to do that. The amount of write downtime Keen had last year can be stated in single-digit hours. We hate that we have a period with 1-2% inconsistent data, believe me, but mobile apps suffer connectivity issues, people run AdBlock on the web, batteries in sensors unexpectedly die – single-digit error percentages creep into the analytics equation way before data is sent to Keen. The question you have to ponder is a cost/benefit – the cost of building/maintaining verification vs. the cost to you / your business when more error creeps in than usual (because Keen or something else fails in an unexpected way).

There’s no one right answer. It really depends on what you’re tracking, your cost & time constraints, and in some sense your belief that the resiliency of the Internet and downstream services like Keen will get more reliable over time. I believe they (and we) will. Part of why I wanted to have this AMA, and more in the future, is to share the transparency I have into what’s going on at Keen so you have more information with which to form your own beliefs.
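Josh’s “rolling record of events” idea could look something like the sketch below. The class, its retention period, and its in-memory storage are all illustrative assumptions (a production version would want durable storage), not a Keen SDK feature.

```python
import time
from collections import deque

class RollingEventLog:
    """Keep a local rolling record of sent events so a questionable
    period can be replayed later. Hypothetical sketch; in-memory only."""

    def __init__(self, retention_seconds=7 * 24 * 3600):
        self.retention = retention_seconds
        self.events = deque()  # (sent_at, collection, event) tuples

    def record(self, collection, event, sent_at=None):
        """Note an event at the moment it is sent to the analytics API."""
        sent_at = time.time() if sent_at is None else sent_at
        self.events.append((sent_at, collection, event))
        self._expire()

    def events_between(self, start, end):
        """Events sent during a questionable window, ready to re-send."""
        return [(c, e) for t, c, e in self.events if start <= t <= end]

    def _expire(self):
        # Drop events older than the retention horizon.
        cutoff = time.time() - self.retention
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()
```

An overnight consistency check could then call `events_between` for the suspect window and re-send what it finds.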


That’s it for the quick summaries. You can read all ~35 posts in full on the AMA itself.

I want to preemptively acknowledge that we didn’t get into as much of the nitty-gritty tech stuff as I had hoped. There is still a technical postmortem on the way, and I’m happy to use that as a starter for another conversation on the developer group.

A special thanks to everyone who dropped by to chat with us. We learned a lot and your participation was really inspiring.

Josh Dzielak

In-House Open Sorcerer

Net Neutrality and Startups: Talking with the FCC and Congress about an Open Internet

Yesterday marked a huge moment in the fight for net neutrality and an open internet. After years of contention and debate, the FCC voted to ensure that the internet remains an open utility uncontrolled by corporate interests. If you’re anything like us, you’re probably breathing a huge sigh of relief. 

We’re also super proud, though, that a member of the Keen team had the opportunity to speak to members of the FCC and Congress on the subject of net neutrality on the eve of this critical vote.

Two weeks ago, our very own Dan Kador joined a handful of representatives from other startups in a fly-in to Washington D.C. to offer up some perspective on how the policy could affect the startup industry. Here’s Dan with more info on exactly what went down:

How and why did you get brought into this whole thing? The FCC invited a bunch of startups to discuss net neutrality, right? For what reason? What other startups were invited along with you?

Caroline [part of our Advocacy team] sent us an email from a group called Engine. They were looking to send startup folks who are interested in net neutrality to Washington D.C. to speak with members of the FCC, House, and Senate. I connected with them, and they were eager to get a group together before the FCC announced its decision.

Washington is used to hearing from the big guys. Comcast, Verizon, even Netflix – they’re the ones spending garbage bags full of money on this. They don’t get to hear from young companies like ours, and they’re eager to hear our perspectives.

Alongside Keen, we had representation from Vimeo, Etsy, Foursquare, Union Square Ventures, Bigger Markets, Capitol Bells, and Spend Consciously.

What was the itinerary like? Where did you go? Who did you speak to? What was the format? What did they ask you? 

They call it a “fly-in”. We take one day and try to have as many meetings as possible. I flew into D.C. Wednesday night. We met as a group Thursday morning at the Mandarin Oriental, then walked over to the FCC. Our first meeting was with FCC Commissioner Rosenworcel and her staff. We introduced ourselves and then discussed why net neutrality was important to us. Rosenworcel is an ally, so we spent much of the meeting talking about how we could best use our time in DC with people who weren’t as aligned with us.

Then we met with members of FCC Chairman Wheeler’s staff. This meeting was mostly spent on digging into some of the specific details of what the FCC might rule on. Things like interconnection, paid prioritization, and zero rating. Our goal was to have the FCC write rules that were as simple and as clear as possible.

After that, we had a series of meetings with staff of various senators and representatives. In these meetings, we had a few goals. The biggest was to impress upon them that we wanted them to support the FCC’s decision. One big risk is that Congress might wade into this debate and try to legislate. Our belief is that we can get much stronger and better protections if the FCC is left to its own devices. Any legislation that passes the Republican-controlled Congress is going to be worse for net neutrality than what the FCC has now passed.

What were the points that you made in these discussions? Basically, why do you think net neutrality is important – both as an average person and as someone at a startup?

There were a couple important points we were trying to make.

First, that net neutrality is essential for entrepreneurship, especially in tech. Without net neutrality, large incumbents could easily pay to enforce their monopoly over a particular industry. Imagine if Comcast could charge its internet subscribers extra for access to the next YouTube but give access to Hulu for free? It would stifle innovation.

We also wanted to make the point that startups need simple and understandable rules for how to bring complaints. If I have a reasonable belief that I’m being made to pay more for my service’s traffic, I need a quick and easy way to bring that complaint to the FCC and have action taken quickly. Otherwise, there would be a strong stifling effect on innovation. Imagine if the process involved spending millions of dollars on lawyers and several years? No startup could grow under those circumstances.

As an average person, this is really important. One of the main reasons we’ve seen so much innovation over the last 30 years is because of the internet. If it suddenly became an uneven playground, we’d see a lot less growth. I don’t want a small group of ISPs deciding what technology wins and what technology dies. I want consumers to decide. We get that with net neutrality. 

Last, but definitely not least, I think high-speed access to the internet is a fundamental human right at this point and the FCC’s ruling helps protect that.

image

That’s Dan in the middle, with the bowtie!

Anyway, we don’t want to say that Keen definitely helped save the Internet or anything, but we’re super proud that we had the chance to speak on behalf of startups like us. Great job, Dan!

Alexa Meyer

Brand and behavioral design. Learner + Activator. Cheese consumer.

How I Went From 60,000 Readers to 1 in One Year

When I started writing publicly on this blog in 2012, it opened up a whole new world to me. One of my posts was published in an O’Reilly book. Another became part of the entrepreneurship curriculum at Harvard Business School, where I was invited to guest lecture. Several of my posts got upwards of 50k views. Suddenly, people I met in San Francisco already “knew” me from the blog. Folks wrote me personal emails asking for career advice. Werner Vogels, the CTO of Amazon, retweeted one of my pieces to his 65,000 followers. It was incredibly rewarding.

Then… something changed.

When our company scaled from 8 people to 15, I started a draft about how hard that was, and what worked, and what didn’t. After several weeks, half of those ideas were already being disproven. We were going so fast. Defeated, I abandoned the draft and stopped blogging completely. That was a year ago.

But, looking back, it wasn’t my writing that was blocked, it was the sharing that I’d stopped doing.

In fact, after a 10+ year hiatus, I’d started writing letters again, to my now-pregnant childhood friend back home in Illinois. Writing to her is as effortless as talking.

I told my friend about all the biggest things that were going on in my life, which to be honest were 99% company-related. I told her about the teammate we knew we needed to let go, and how we were too chicken to do it. I told her about raising our Series A funding, and my mixed feelings about what that meant to me and my husband, and our company. I confessed about how surprisingly intimate it can be to work closely with people you trust & admire, and how confusing those feelings can be sometimes.

When she told me about how she was finally getting over morning sickness, I told her I couldn’t think about having kids anymore, because I was completely emotionally maxed out. By that point the company was 30 people and I felt personally responsible for all of them. That letter is where I finally admitted to myself how ridiculously I’d over-extended myself. It was such a relief.

When I look back, I’ve learned so much over the last year, and have a lot of regrets about not publishing any of it.

The truth of the matter is, I am fucking terrified of sharing any of this type of stuff publicly. It’s a mix of Imposter Syndrome and outright Fear. 

Here are some examples of thoughts I have:

  • Why would anyone care about what you have to say? Your story isn’t special. Keep your ego in check. 
  • Your writing is mediocre at best. You just got lucky in the past because your friends up-voted your posts on Hacker News.
  • Our company has a brand now. That some people actually know about. Don’t screw it up. 
  • The only reason people liked your posts was because you are a female and they were trying to be nice.
  • People already think the only reason you have this job is because your husband is the CEO. Blabbing about stuff you don’t really know that much about on the internet is not going to help your case.
  • If you write about your company, your friends and family back home are going to think you’re bragging about your success. 

Taken one by one, these doubts are pretty easy to argue against, but the combination of them can feel like quicksand. With practice, we can all learn to quiet these voices, but it takes hard work.

Then there’s the straight-up Actual Risks category of anxieties. This is a special category that disproportionately affects women & minorities in the tech community. It’s the threat of the trolls, the misogynists, the racists, the rape threats. It’s accidentally getting your company DDoS’d because you complained at a conference. It’s the stalker-fan who sends you explicit messages through every channel, and knowing there’s nothing you can do about it except hope they don’t show up on your doorstep. This special kind of fear is a topic for an entirely separate article, but I wanted to mention it briefly because it is a very real and true blocker for so many of us, particularly women in technology.

DESPITE all these doubts & fears, I still think it’s worth it to write.

So, this is me reminding myself:

  • of all the helpful posts I’ve read and how glad I am that people shared them. To pay it forward.
  • of the thank you notes and tweets I’ve gotten, for the stuff I wrote that really helped people. 
  • that unlike a lot of other types of work, content keeps on giving long after you’ve published it, and it always seems worth it in hindsight
  • that I have a responsibility to the tech community, to do my part so that it’s not only men’s voices talking about technology and entrepreneurship. 
  • that publishing has been some of the most rewarding work I have done in my career, and that it’s incredibly fulfilling; that doing it for myself might be reason enough.

Peace & Love,
Michelle

Michelle Wetzler

Chief Data Scientist, Human.

Query Performance Update

It’s been a rough couple of months for Keen IO customers – if you’ve been tracking Keen’s Status Page recently, you’ve probably noticed that we’ve reported a large number of incidents that have negatively impacted query performance for all of our customers.

We’ve been posting follow-up information for individual incidents on the Status Page, but we wanted to take some time to:

  • Acknowledge that there has been a pattern of frequently degraded query performance, which is not acceptable to us
  • Provide you with more insight into why query performance has been suffering lately
  • Explain what we are doing to improve query performance

Why Has Query Performance Been So Variable?

Unanticipated Query Patterns
We deliberately designed Keen to be a very flexible analytics solution, and provided a set of APIs to Collect, Analyze, and Visualize data. Given the flexibility of the API and the growth of Keen IO, we’ve started to see customers using Keen and running analysis queries in ways we didn’t anticipate and therefore don’t handle gracefully.

Hardware/Infrastructure Issues
We’ve had a streak of bad luck with our hardware and physical infrastructure and haven’t done a great job managing their impact on our service: a network outage with our hosting provider, a hardware failure, a Cassandra node failure, and a misconfigured setting on our front-end API servers.

Increased Volume
The use of Keen IO by our customers has grown tremendously over the last several months, and we’ve run into some issues at scale that haven’t presented themselves before. A significant increase in events and queries has caused the platform to misbehave in different and unanticipated ways.

The combination of these issues has put a lot of strain on our platform, and query performance has suffered as a result.

That said, Keen should be able to handle increased customer volume, accommodate unanticipated query patterns, and keep running even when hardware/infrastructure fails. As a platform provider, it’s our job to anticipate and plan for these issues so that our customers don’t have to. We know that we need to do better. And we will!

What We’re Doing to Improve Query Performance

In the Short Term
We’ve started providing more prescriptive guidance on how to optimize queries by publishing a Query Performance Guide. We’re also working on a new version of our documentation to ensure that customers know how to optimally use Keen across the board.

We’re also currently rolling out some new internal tools to our infrastructure that will provide us much better visibility into customer query patterns, allowing us to implement better quality of service for our Analytics API and helping us harden our platform against unanticipated spikes in query volume.

Finally, we’re actively working on revising our query rate limits, and will be providing more detail on this very soon.

Longer-Term Fixes
In addition to those short-term tactical fixes, we’re currently implementing low-level changes to our Cassandra infrastructure to better identify, isolate, and remove performance bottlenecks in our platform.

It’s important to note that rolling out fixes while maintaining an active platform for customers will not be an overnight process. The “timeframe to resolution” on this should be thought of in weeks, not days. We’re shoring up our infrastructure now to harden the platform – not just for today’s requirements, but to keep us up and running well into the future.

Finally

We are really, really sorry for the hassle and inconvenience you’ve had to put up with lately. “Scaling is hard” is an explanation, not an excuse. It’s our job to ensure that what we’ve built stands up and performs well for all of our customers, regardless of load, and we’re committed to keep working and improving until every customer has a rock-solid analytics platform they can depend on.

We’ve hit some bumps in the road. We know how frustrating it is for you and your customers when things aren’t working, and we’re working incredibly hard to make things right. We’re confident things will get better and we hope you all stick with us while we’re making these improvements.

If you have any questions, please reach out to us anytime at team@keen.io.

John Shim

I'm a tech-focused "people person"

Updating Keen's API Status Codes

This week we’re pleased to ship some small changes to make our API responses a bit more precise when a problem is encountered. We’re doing this by changing a few of our HTTP status codes. We’ve gone over our SDKs to verify everything will continue to behave as expected, but we wanted to be sure to communicate these changes to the folks who’ve built such wonderful stuff on top of our API: you!

Better Than 400

Most of our errors return a status code of 400. This gets the point across, but it doesn’t identify the specific problem in a way that lets an API user react accordingly. A developer would need to examine the error message string, which is not nearly as simple as looking at the status code! To that end, here is a summary of the changes:

  • Events that are too large (either single or in batch) will now return a 413, representing “request entity too large.”
  • Trying to delete a “keen” property will return a 403, representing “forbidden.”
  • API endpoints that have been blocked due to lack of payment will return a 402, representing “payment required.”
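
Since clients can now branch on the status code alone, error handling can be a simple lookup. Here’s a minimal sketch of that idea; the mapping and function name are illustrative only, not part of any Keen SDK:

```python
# Map the new Keen error status codes to client-side actions.
# (Illustrative sketch: names and messages are ours, not Keen's.)
ERROR_ACTIONS = {
    413: "request entity too large: split the event batch into smaller pieces",
    403: "forbidden: reserved 'keen' properties cannot be deleted",
    402: "payment required: this endpoint is blocked until billing is resolved",
}

def classify(status_code):
    """Return a human-readable action for a Keen API response code."""
    if 200 <= status_code < 300:
        return "ok"
    return ERROR_ACTIONS.get(status_code, "generic client error (400)")
```

A client that previously string-matched error messages can now switch on `classify(resp.status_code)` instead.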

And One More Thing: Query Timeouts

There’s one other change that’s a bit bigger. Query timeouts used to return a 400, but they now return a 503, representing “service unavailable.” This is a more correct answer because responses in the 4XX range signal that the client has made a mistake. Queries timing out are clearly a problem on Keen’s side, and our responses should make that clear!
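
One practical upshot: a 503 signals a server-side problem that may clear up, so it’s safe to retry, while a 4XX means the request itself needs fixing. A minimal retry sketch under that assumption (the function names are illustrative, not from the Keen SDK; `query_fn` stands in for whatever issues the real HTTP request):

```python
import time

def run_with_retry(query_fn, retries=3, backoff=0.5):
    """Retry a query while the server reports a timeout (503).

    query_fn should return (status_code, body). 4XX responses are
    returned immediately, since they indicate a client mistake that
    a retry won't fix.
    """
    status, body = query_fn()
    for attempt in range(retries):
        if status != 503:
            break
        # Exponential backoff before retrying the timed-out query.
        time.sleep(backoff * (2 ** attempt))
        status, body = query_fn()
    return status, body
```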

Anything Else?

These changes improve our SDKs’ ability to take proper action on certain failures, improve our ability to monitor patterns of status codes, and all-around make our API better. We look forward to continuing to improve our API both in capabilities and correctness in the future. If you have any questions, don’t hesitate to shoot an email to team@keen.io. Thanks for building things with Keen!

Cory Watson

Bigger than a breadbox.

7 Lessons from Heavybit’s DevGuild

I wanted to share with you my takeaways from Heavybit’s DevGuild. It was a great event with over 200 people coming together to focus on developer community-building.

One of the best parts of DevGuild was meeting so many fresh faces who are passionate about Developer Evangelism. Josh Dzielak was the emcee and Tim Falls eloquently gave a talk called, “Measuring developer evangelism…or not!?” Josh always keeps the crowd energized and Tim’s man bun was on point! You can check out all of the talks on Heavybit’s website.

image

So knowledgeable and full of man-bunly goodness!

Some solid learnings:

  • You don’t have to be intimidated or embarrassed by the word evangelism. It can carry some negative connotations, but you have control over how you represent yourself. Mainly, you have valuable information to share. Share it with the right people.
  • You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.
  • Don’t just get measurements. Ask why you’re measuring!
  • “Many of the things you can count, don’t count. Many of the things you can’t count, really count.” (often attributed to Albert Einstein)
  • Don’t underestimate the value of face-to-face interactions to build trust and develop relationships with your network. Building communities is an art. Connect your differences.
  • Big NO NO’s: 1. Not knowing your product very well 2. Drinking too much coffee (jitters) 3. Not buying the beer 4. Being a bridge to nowhere
  • The red sauce on the veggie platter from Bi-rite is spicy and awkwardly tangy

Plus, the event raised over $2,735 for a local charity, the Canon Kip Senior Center {WAAAAPPOOWWW!!!}.

I had a blast meeting other Developer Evangelists and I talked to many people who said that they had to really fight to convince their management team that Developer Evangelism is important and should be invested in. We had some great conversations about metrics and tangible items that you can and cannot measure.

The field of Developer Evangelism is developing very quickly. It’s exciting and it makes all the difference knowing you are part of a team you can trust to represent your product in the very best way! Now go out there and share all that amazing information you have in those beautiful heads of yours.

Ricky Vidrio

empath adventurer

Top 12 new features and tools you might not know about

If you’re using Keen, you might have wondered: what’s the easiest way to build a dashboard? Or how can I get the fastest queries? Or how can I set up automatic reports? And maybe you’ve also wondered, where can I go to see all the latest features in one place? Well, wonder no more because here it is!

New API Features

  • Query caching (in private beta) - Get super-fast responses by setting preferences for when to use cached results

  • Custom intervals - Break up query results into any sub-timeframe, such as ‘every_10_minutes’

  • Funnels enhancements - Add actors and optional steps to analyze complex funnels

  • User-agent parsing - If you send Keen IO a user agent string, we can parse it out into browser, device, etc. for you

  • Extraction API enhancements - Use ‘property_names’ param to extract only the relevant properties of your events or ‘content_encoding’ param to compress your CSV extractions for faster download

  • Query Performance Guide - Tips for speeding up query time, extra useful for large data sets
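
The custom-interval and extraction features above are just query parameters on the REST API. As a rough sketch of what those requests look like, here are two URL builders; the project ID, keys, collection names, and the comma-separated `property_names` format are all placeholder assumptions, so check the API docs before relying on them:

```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
BASE = "https://api.keen.io/3.0/projects/PROJECT_ID/queries"

def extraction_url(collection, property_names, api_key="READ_KEY"):
    """Build an extraction URL that pulls only the listed properties."""
    params = {
        "api_key": api_key,
        "event_collection": collection,
        # Assumed comma-separated here; verify the expected format.
        "property_names": ",".join(property_names),
    }
    return f"{BASE}/extraction?{urlencode(params)}"

def count_url(collection, interval="every_10_minutes", api_key="READ_KEY"):
    """Build a count query broken into custom sub-timeframes."""
    params = {
        "api_key": api_key,
        "event_collection": collection,
        "timeframe": "this_24_hours",
        "interval": interval,
    }
    return f"{BASE}/count?{urlencode(params)}"
```

Fetching either URL (with a real project ID and read key) returns JSON you can feed straight into the viz tools below.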

New Query, Viz, and Reporting Tools

  • keen-cli - Quickly run Keen queries, add events, and run maintenance directly from the command line

  • New JavaScript library - More ways to render charts, interact with data, and show query visualizations

  • Dashboard templates for Bootstrap - Super handy when it comes time to make those beautiful dashboards

  • Pushpop - Easy way to make nightly / weekly / monthly email reports and SMS alerts based on Keen queries

New Data Collection and Streaming Tools

  • Streaming to S3 - If you need to analyze your incoming data outside of Keen, we can stream it to an S3 bucket for you

  • Electric Imp and Tessel Integrations - great ways for Internet of Things (IoT) developers to add analytics to all their projects

We also want to share some happy data from 2014 about our top two priorities: reliability and scalability. Over the past year, we scaled Keen volume by 12X and maintained 99.99% uptime for the year :)

Want to stay up-to-date on all our new features and announcements? Join the Keen dev group, and follow us on Twitter. Here’s to a Keenly awesome 2015!

This is the bar where we wrote this list. (Note: we later discovered we’re better at making lists than playing bar trivia.)

Kevin Wofsy

Teacher, traveler, storyteller