More Hacking Smart Home Devices with Tasmota, Youngzuth and Gosund

Gosund SW2 Dimmer with wires soldered to serial connections on circuitboard

This is a follow-up to the post Hacking Smart Light Switches and Other IoT Devices…, where I installed Tasmota on a Gosund Smart Light Switch (SW1). I also installed replacement software on the Gosund Smart Dimmer Switch (SW2) and the Youngzuth 2-in-1 Switch, and since the process was quite different for each, I thought it would be worthwhile to share my experience.

Using TUYA-CONVERT is preferred since it doesn’t require opening up a device or soldering, but it seems like all newer devices are using software that can’t be hacked wirelessly anymore, so you will likely need to open your smart home device. Now, let’s go void some warranties!

Gosund Smart Dimmer Switch (SW2)

This dimmer switch has a nice capacitive touch panel for changing the lighting level, so it feels a lot like adjusting something on a touch screen. Since Gosund also makes the SW1 switch I started with, I was hopeful it would be similar and I could avoid soldering… not so much.

Gosund Smart Dimmer Switch (SW2), wires soldered to the circuitboard to enable a serial connection.

Like the SW1, the SW2 requires a Torx T5 screwdriver to open it. Unlike the SW1, the SW2 dimmer switch has two circuitboards in it, connected by a small cable. Reading about this switch, one person claimed it could not be hacked with that cable connected – this is not true, and I bricked one of these detaching the cable… not recommended. Unfortunately, the serial connections are in the middle of the board, so the test hook clips I used on the SW1 would not work here. However, the connection points are pretty big and well-labeled, so soldering wires to them is pretty easy. Once I had connections, installing the new software was super simple, exactly like the SW1. It’s nice when things just work!

But, of course, things didn’t just work. When I installed the dimmer, the dimming functionality didn’t work from the switch. Looking at the Tasmota template details for the Gosund SW2 Dimmer, this switch requires extra scripting to function properly. However, scripting is not available in the basic Tasmota software, so it needed a different build. Fortunately, once you have Tasmota installed, switching the software is easy and only requires a web browser – just select “Firmware Upgrade” from the web interface. Except when it isn’t so easy. Trying to install tasmota-scripting.bin from the unofficial releases failed; I first had to install tasmota-minimal.bin to get the smallest possible install, and then install the compressed version of the unofficial release, tasmota-scripting.bin.gz (only the .gz version would install successfully). I used the OTA (over the air) install for the minimal software (pointed to the official OTA releases), and manually uploaded the scripting gzipped binary downloaded from the unofficial experimental builds. Once installed, there are new menu options in the web interface, “Configuration” -> “Edit Script”, where you simply paste and enable the script from the template page. None of this was complicated, but it also wasn’t very obvious… hopefully I can save you some trial and error.
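
If you prefer driving this from the command line instead of clicking through the web UI, the same two-step upgrade can be done against Tasmota’s HTTP command endpoint. This is only a sketch: the device IP address and OTA URL below are placeholders, so substitute your switch’s address and whichever build location you trust.

 # point the switch at a minimal build and start the OTA upgrade (IP and URL are examples)
 curl "http://192.168.1.50/cm?cmnd=OtaUrl%20http://ota.tasmota.com/tasmota/release/tasmota-minimal.bin.gz"
 curl "http://192.168.1.50/cm?cmnd=Upgrade%201"

After the switch reboots into the minimal build, tasmota-scripting.bin.gz still gets uploaded manually through “Firmware Upgrade” in the web interface.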

And the switch works great and immediately worked with Alexa (make sure emulation is set to “Hue Bridge” so Alexa can use the dimming functionality).

Youngzuth 2-in-1 Switch

Youngzuth 2-in-1 Switch with wires soldered to make a serial connection to the TYWE3S.

The Youngzuth 2-in-1 Switch is actually two switches that fit into the space of a single switch. When I opened the switch (Phillips head screwdriver) and started looking around the circuitboard, I couldn’t find any connection points for the serial interface. I finally hit the point I had been dreading… needing to solder directly to the chip.

The Youngzuth 2-in-1 uses a TYWE3S module, and fortunately a lot of details are available on the Tuya Developer website, so it was pretty easy to figure out the chip connections. I really hate soldering, especially on tiny components next to other tiny components, so I had a margarita to steady my hand.

TYWE3S pin connections, colors showing all pins needed to reprogram

Once wires were connected, installing the software was a breeze. Configuration was also easy, with an example provided in the Youngzuth 2-in-1 template.

Full disclosure, I have not yet installed the Youngzuth switch, as I made a rookie mistake and didn’t realize there is no same-feed neutral connection at the switch location. Once it is installed, I will post an update if anything requires extra work.

If you have any questions or different experiences with these devices, please leave a reply below!

Hacking Smart Light Switches and Other IoT Devices

Gosund SW1 circuitboard with test hook clips on serial connections

If you’ve ever had a free weekend, a desire to create a more secure smart home, and questionable judgment, you’ve come to the right place. In this post I’ll talk about how to take common IoT (Internet of Things) devices and put your own software on them.

Disclaimer: depending on the device, this exercise can range from pretty easy to drink-bourbon-and-slam-your-head-against-the-desk difficult. Oh, and there is some risk of electrocuting yourself or setting your house on fire. So everything after this point is for entertainment purposes only…

Why Hack Your IoT Devices?

Most people creating a smart home take the easy path… pick out some cheap and popular devices on Amazon, install the smartphone app to configure them, and they’re good to go. Why would anyone want to go through the extra effort to hack the device? There are a few good reasons:

  1. Security: With few exceptions, most smart devices require installing an app on your phone, often from an unknown vendor and requesting questionable device permissions. The devices themselves are tiny, wifi-connected computers whose software is updated by connecting to a server in some other country and installing whatever new software it delivers onto a device sitting on your home network. Having a cheap device on your home network that requires full access to the Internet just to work is bad, but it is worse when that software can be changed at any time, to do whatever the person changing it wants it to do. This could turn your light switch into part of a botnet, or worse, be exploited to attack other devices on your home network. By replacing the software, you create a device that works properly without ever needing access to the Internet, lowering the security risk. You can also see (and change) exactly what software the device is using.
  2. Sustainability: Since these devices require communicating with an external company for configuration and updates, when that company stops supporting the device – or worse, goes out of business and turns off their servers – your device becomes useless or stuck in its current configuration forever. By replacing the software, you are able to support the device even if the company ceases to exist. And by using open source software with a robust community, you will likely have very long-term support.
  3. Because I Can (mu ha ha ha): Okay, this is more of a fun reason, but worth mentioning. I’ve generally been much happier with the hacked versions of my products, whether it be my Tivo, Wii, or car dashboard. Smart light switches are a relatively low-risk hack, as they are inexpensive, and I’m assuming the risk is turning it into a brick, not causing an electrical fire (I’ll update the blog if I have an update on that).

Getting Started

My adventure started with the spontaneous purchase of a Gosund Smart Light Switch. Like a gazillion IoT devices sold by name brand and random manufacturers, this switch is controlled by an ESP8266. Most of these ESP8266 devices use a turnkey software solution made by Tuya, a Chinese company powering thousands of brands from Philips to complete randos.

For security and sustainability reasons, I decided I didn’t want this switch phoning home from my network; even if I wrote complex firewall rules to limit its access, it would still need to connect to the open Internet and to other devices in my house to work properly.

I did some research and found Tasmota, an open source project that replaces the software on ESP8266 or ESP8285 devices, eliminating the need for Internet access and enabling functionality that makes them easier to connect to controllers like Amazon’s Alexa. The older examples required disassembling the device and soldering to hack it, which is exactly what I did not want to do. However, more recently an OTA (over the air) solution appeared that didn’t require opening the device at all and did all of the hacking over wifi… that sounded great.

Tasmota Wifi Installation

When I tinker I like to use a computer that I can reset easily so that I don’t have to worry about an odd configuration causing problems later. I have an extra Raspberry Pi that is handy for this, and installed a clean version of the Raspberry Pi Desktop onto a spare Micro SD card.

I installed TUYA-CONVERT, which basically creates a new wifi network and forges the DNS (how computers translate a name like tuya.com into the numbers that identify a server) so that Tuya’s hostnames resolve to itself rather than the real Tuya servers. When the device goes to get a software update from the mothership, it gets the Tasmota software installed instead – hacking complete.
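
For reference, getting TUYA-CONVERT running looked roughly like this on the Pi (a sketch based on the project’s README at the time I tried it; check the current instructions before running anything):

 # fetch the project and its prerequisites, then start the fake access point / flashing process
 git clone https://github.com/ct-Open-Source/tuya-convert
 cd tuya-convert
 ./install_prereq.sh
 ./start_flash.sh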

Gosund Light Switch In Dangerous Setting
An example of poor judgment; however, the red load wire is capped, as that is not a good wire to touch when the switch is on.

I started running the tuya-convert script on my Raspberry Pi and, rather than go through the full process of installing the switch in the wall, I found that a standard PC power cable (C13) was the perfect size to hold the wires in place and allow testing on my desk. DO NOT DO THIS – I am showing it only as an example of what a person of questionable judgment might do. The switch powered up and on the tuya-convert console I could see it connecting and trying to get the new software! I love it when things just work.

But then, it didn’t work. While there was a lot of exciting communication happening between the Raspberry Pi and the switch, ultimately the install failed. Looking at the logs, I was getting the message “could not establish sslpsk socket”, and found this open issue, New PSK format #483. Apparently, newer versions of the Tuya software require a secret key from the server to do a software update, and without the key (known only by Tuya), no new software will be accepted. So, damn… these newer devices can’t use the simple OTA update. Also, if you have older devices, do not configure them with the app they come with if you plan on hacking them, as that will update them from the OTA-friendly software to a version requiring the secret key.

Tasmota Serial Cable Installation

I realized I was too far down the rabbit hole to give up, so it was on to the disassembly and soldering option. The Tasmota site has a pretty good overview of how to do this, although I thought a no-solder solution might be possible, and tried to find the path that required the least effort (yay laziness).

Gosund Light Switch Circuitboard
Gosund light switch SW5-V1.2 circuitboard, pen for scale. The connection points are the six dots towards the top, running down the right side (zoom in for labels).

Opening the switch required a Torx T5 screwdriver (tiny, star-shaped tool), and I happened to have one laying around from when I replaced my MacBook Pro battery. Looking at the circuit board, I realized that very tiny labels and contact points, combined with my declining eyesight, made this a challenge. I took a quick photo with my Pixel 4a and zoomed in to see what I needed… the serial connections on the side of the board (look for the tiny RX, TX, GND, and 3.3 labels… no, really, look). While soldering would be the most reliable connection, I was hoping test hook clips would do the job.

Since I was already using a Raspberry Pi, I didn’t need a USB serial adapter, as I could connect the Pi’s GPIO directly to the switch. Again, the Tasmota project has a page giving an example of connecting directly to the Pi. Whatever method you use, it is critical you connect with 3.3V, not 5V, as the higher voltage will likely fry the ESP8266. If you have a meter handy, check and double-check the voltage. And, if you’re using Raspbian, you may find /dev/ttyS0 is disabled… you will need to add enable_uart=1 to your /boot/config.txt file and reboot.
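
For reference, enabling the Pi’s serial port is a quick edit (this assumes Raspbian / Raspberry Pi OS, where the firmware config normally lives at /boot/config.txt):

 # enable the UART so /dev/ttyS0 exists, then reboot for it to take effect
 echo "enable_uart=1" | sudo tee -a /boot/config.txt
 sudo reboot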

I connected the switch directly to the Raspberry Pi. There were several annoying things about this, starting with the fact that each time the switch was connected to the 3.3V supply, it rebooted the Pi. And since almost every command to the switch requires resetting its programming mode through a power cycle, that meant rebooting the Pi frequently (fortunately it is a fast boot process).

Test hook clips connecting the Raspberry Pi to the Gosund switch worked surprisingly well.

The good news is, the test hook clips worked, which was a bit of a surprise. I added a connection from Pi ground to the switch’s 00 pad (GPIO0, the green wire in the photo), as grounding that pin forces the switch into programming mode at boot (it is okay to leave it connected during the hacking process, or you can detach it once the switch is in programming mode). I made sure everything was precariously balanced to add excitement and more opportunities for failure into the process. I was able to confirm that I had entered programming mode and had access to the switch using esptool, a command line utility for accessing ESP82xx devices. Success! 🎉
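
If you want to run the same sanity check, it was roughly the following (the serial port name is an assumption – depending on your setup it may be /dev/ttyS0, /dev/serial0, or a USB adapter like /dev/ttyUSB0):

 # install esptool, then read the basic chip info and MAC address over serial
 pip install esptool
 esptool.py --port /dev/ttyS0 --baud 115200 chip_id
 esptool.py --port /dev/ttyS0 --baud 115200 read_mac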

The bad news is, other than being able to read the very basics from the switch, like the chip type, frequency, and MAC address, pretty much everything else failed. And each successful access only worked once and then required a reboot. I was unable to upload new software to the switch. After researching a bit, the best clue I found was reports of problems with voltage drops on homemade serial connections, which seemed like it might apply to wiring directly to the Pi’s GPIO. At this point I needed a drink, and went with a nice IPA.

But hey, once you’re this far down the rabbit hole, why stop? I decided to try a more traditional serial connection, using a CH340G USB to serial board.

Serial Killer Part Two

Apparently there was an issue with using the Raspberry Pi directly for the serial communication, because the USB to serial adapter worked perfectly. I validated the connection using esptool and then used the Tasmotizer GUI, which makes it easy to back up, flash, and install new software on the switch. Many steps require rebooting the switch to proceed to the next step, but that is as simple as unplugging the USB cable and plugging it back in (even better, it doesn’t trigger a reboot of the Raspberry Pi each time).
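
If you would rather skip the GUI, esptool can do the same backup-then-flash steps directly; treat this as a sketch, since the port name, 1MB flash size, and file names are assumptions you should verify for your device before erasing anything:

 # back up the factory firmware, wipe the flash, then write the Tasmota image
 esptool.py --port /dev/ttyUSB0 read_flash 0x00000 0x100000 gosund-original.bin
 esptool.py --port /dev/ttyUSB0 erase_flash
 esptool.py --port /dev/ttyUSB0 write_flash 0x00000 tasmota.bin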

Tasmotizer and the default web interface to configure your newly-hacked switch

Once the new software is installed, there is one final reboot of the switch (don’t forget to disconnect the ground to 00, or else it boots back into programming mode). At this point the switch sets up a wifi network named tasmota[mac], where [mac] is part of the MAC address. Connect to this network and point your browser to http://192.168.4.1 and you are able to configure your device. Set AP1 SSId and AP1 Password to your home wifi, click “save”, and a few seconds later your switch will be accessible from your home network.
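
If you prefer, the same initial wifi setup can be done with Tasmota’s HTTP command endpoint while you are connected to the tasmota access point; the SSID and password below are obviously placeholders for your own network:

 # set the home wifi credentials in one request (%20 is an encoded space, %3B an encoded semicolon)
 curl "http://192.168.4.1/cm?cmnd=Backlog%20SSID1%20MyHomeWifi%3B%20Password1%20MyWifiPassword"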

I’ll provide the details of configuration in a follow-up post, but I used the Gosund SW1 Switch template following these instructions to import it, and turned on “Belkin WeMo” emulation to make the switch automatically discoverable by Alexa, without the need to install special apps on my phone or skills on Alexa. The configuration process and connecting to Alexa were incredibly easy and took less than 5 minutes.

Update January 2, 2020: I added a post on hacking the Gosund Smart Dimmer Switch (SW2) and the Youngzuth 2-in-1 Switch, each of which required a different technique.

If you’re curious about attempting this yourself, have questions about my sanity, or have other experiences hacking your smart devices, I’d love to hear from you – please leave a reply below!

Migrating Back to Windows, a High DPI Tragedy

After several years of waiting for Apple to release anything inspirational as a replacement for my Early 2015 MacBook Pro, a failing keyboard finally pushed me over the edge to purchasing a Dell XPS 13 Laptop. This is my initial experience moving back to Windows after 8+ years… since PC hardware options are nearly infinite, I am focusing on the experience going from macOS Catalina (10.15.6) to Windows 10. That said, so far the XPS 13 hardware seems amazing, even compared to a modern MacBook I use for work.

Getting Started

The initial setup with network and account was really smooth, very approachable. If anything could be better, it is password entry: I tend to use extremely secure passwords that are not easy to type reliably, and before any password manager can be installed this is a manual process. I would love to see a solution where the password app on my phone generates a QR code and the new computer scans it with its camera (please, steal that idea everyone).

Once I made it to the desktop, I found the touchpad controls jarring… I can’t fault Windows for this, as all of my desktop navigation is macOS muscle memory. I found various settings to ease my journey. And getting used to the menus and how apps are listed is a learning experience… pretty sure I’m doing it wrong.

Extended Desktop

I spent a lot of time searching because I could not believe this was the non-broken behavior… with multiple monitors, dragging a window between monitors of differing DPI is a tragedy, and in some cases a strategic exercise to get the window usable on another monitor. I’m not sure how any designer got this so wrong: the window does not scale to maintain its proportional size, instead switching to the new size when the window is roughly 50% onto the destination monitor.

Windows 10 Extended Desktop broken

Spotify going into giant-mode as it is moved to my external monitor.

This experience is, to say the least, jarring. If you are coming from a Mac, you are used to the window maintaining its apparent size even when traversing monitors of varying sizes and DPI (and the math to make this work properly is relatively simple on the engineering side). The odd part is, once the window has fully transitioned to the destination monitor, it snaps to a size that matches the source monitor. In some cases it becomes nearly impossible to drag the window, because the gigantic, expanded version can’t make it 50% of the way onto the destination monitor, so it needs to be resized (sometimes multiple times) to work.

UI Size Compatibility

This is another problem that makes me wonder how the average consumer is going to know how to make things work… some programs, even modern ones, don’t render their UI properly unless you modify settings in a Windows 95-era system dialog. For both Gimp and DaVinci Resolve the UI was unusable on install.

Microscopic UI on Gimp

Gimp UI at default settings. This screenshot is extremely generous, as it was taken in a small window; even so, the brush icons are about 2 millimeters wide and the rest of the UI is overlapping text.

The solution for this is cryptic. The user must find the application executable by digging through the bin folder, open its properties, and click the “Change high DPI settings” button.

Of course, I should need to set high DPI on a per-program basis…

And in this settings dialog there are additional, non-obvious options for making the UI work properly.

And, even less obvious, you need to use the High DPI scaling override and select “System”.

On the bright side, I was able to get these programs to render properly with a usable UI (although DaVinci Resolve is a great example of a window that is almost impossible to move to another monitor based on the extended desktop problems mentioned earlier).

Crashtastic Browser Tabs

It is possible that this is not Windows’ fault, but my initial research suggests this problem is specific to newer versions of Windows 10, at least the 64-bit builds, and happens in (at least) the Chrome and Edge browsers. Browser tabs seem to crash frequently.

After 45 minutes, five browser tabs crashed with the error code STATUS_BREAKPOINT.

Since I have read reports of this in both Chrome and Edge, it is possible this is a bug in Chromium, which they both share.

Is it Me?

I am open to the possibility I am doing something horribly wrong. Honestly, I would love for somebody to p0wn me, and let me know how I missed the obvious “don’t do absurd stuff” checkbox in the setup process. However, I am sort of handy with computers and from looking around, many people are experiencing the same issues… And even if I missed something, for a great consumer experience, this should just work.

If it seems like I’m being a little critical based on my first 48 hours, that’s because these friction points are consuming a lot of my time. I expect adjusting to different UI controls, but I don’t expect having to fix clearly broken behaviors right out of the box, using all modern software.

Otherwise, Windows 10 looks like it has caught up with, and possibly surpassed, macOS in many ways. I’m looking forward to getting past the broken glass and barbed wire so I can start appreciating the rest of the experience.

If you’re a wizard with Windows and have some sorcery to solve these problems, please leave a comment and I will shout your praises.

Update August 31, 2020: I installed the 32-bit version of Chrome and it seems to have slightly reduced, but not eliminated, browser tabs crashing (super subjective observation).

Update September 2020: I gave up and went back to a MacBook Pro. The Dell laptop went to a friend, and eventually Dell had to replace the motherboard, which seems to have solved the random failure issues (but none of the UX/UI issues, obviously). I’m loving my new MacBook Pro, even though I was probably the very last person in the world to buy an Intel MacBook, since the M1 was released about 15 seconds after my purchase.

Google I/O 2019, Some Exciting Bits that Were not Obviously Exciting

Over the last couple of days I’ve been looking at the various product announcements that came out of Google I/O 2019, and there were a couple of themes that got me pretty excited about where Google can go and how that can make a positive impact on millions of people.

Creating Opportunities for People… All People

I loved the Google Lens announcements from Aparna Chennapragada because the application of the technology can make such a huge difference in people’s lives, and not just the lives of the people I typically see in Silicon Valley wearing fleece vests and sipping cold brew coffee. What was most compelling to me was the transcribing / Google Translate integration that was demonstrated, especially when combined with the processing being done on device (not in the cloud) and being accessible to extremely low-end ($35) devices. Visual translation was always a very cool feature and, when I was trying to figure out menus in Paris, I was happy to have the privilege of a high-end phone and data plan. Making this technology widely accessible enables breaking down barriers created by illiteracy, assisting the visually impaired, and helping human interactions in regions with language borders.

Google also announced Live Caption, where pretty much every form of video (including third party apps and live chat) can have real-time subtitles. This is also done on-device and works offline, so it can be applied to live events, like watching a speaker at a conference. A shoutout to my friend and former colleague KR Liu for her work with Google on this project, which makes the world far more accessible to people with hearing challenges.

Also notable, Google’s Project Euphonia is making speech recognition more accessible to people with impaired speech.

Movement Towards Device vs. Cloud

The “on device” and “offline” features I mentioned (and were part of other announcements like Google Assistant improvements) are important because of the implications they have in making the technology available to everyone, and also because of the personal privacy that capability will enable.

Of course, my data, Google’s access to it, and personal privacy is a much larger, complicated conversation… for now I am going to focus on possibilities, not challenges.

For years there has been a move for all aspects of people’s lives to be captured and collected in the cloud. There are many reasons this may have been necessary, from correlating data to make it useful, to raw processing power requirements, over-reaching policies, and business models requiring all the things to win. Once in the cloud, personal information can be used for purposes never imagined by the consumer, including detailed profiling, sharing with third parties, accidentally leaking to malicious parties, revealing personal content, and various other exploitations that can negatively impact the consumer.

As the processing stays on your device and does not require transferring data off of your device, it enables products that can still provide incredible benefits while also being respectful of customer privacy. This is exciting as there are product opportunities in areas like personal health (physical and mental) that will likely require deep trust and protection of consumer information to gain wide acceptance and benefit the most people.

Personal Assistant of My Dreams

And something I am more selfishly excited about…

For several years I have wished that all of Google’s products would integrate with each other and eliminate almost every manual step I have in organizing my day. I am going to side-step the discussion about how much data a company has about an individual and say that I intentionally choose to trust my information with two companies (Google being one), because of the value I get from them. I use Google to organize most aspects of my life, from email communication to coordinating my kid’s schedules, video conferencing, travel planning, finding my way around anywhere, and almost every form of document. As a result, all the parts of Google know a lot about me. But still, when I send an email to set up a meeting, I usually need to manually add it to my calendar and then also add the travel details (I frequently take trains instead of driving)… it’s a couple of extra minutes that I could be spending on better things, or just looking at pictures of cats on the Internet.

With the progress of Google Assistant and Google Duplex, I am seeing a path where administrivia is eliminated, where email, text messages, phone calls and video conferencing can also provide inputs that guide this assistant into organizing my life behind the scenes… Action items discussed in a Hangout can automatically result in a summary document, a coordinated follow-up lunch, optimal travel details, and a task list.

There is an obvious contradiction between my excitement for the announcements that emphasize better human outcomes and my “let Google know all the things” excitement over a personal assistant, but again, this is about my personal, intentional choice to share data vs. products that mandate supplying personal data, often far in excess of what is necessary to deliver the product or service.

There were some other “that’s cool” announcements, and I’ll probably be buying a Pixel 3a, which seems like a great deal for the feature set, but overall I’m more excited about the direction than the specific products showcased.

Empathy Driven Metrics

Social networks, online communities, and social media are services we use because of the promise they offer to strengthen relationships with other humans. However, these services frequently fall short of that promise, sometimes harming the relationships they were meant to support. In many companies, delivering a negative customer outcome results in business failure, but for many social companies, negative customer outcomes are producing positive business results for product teams because the business success metrics are not aligned with customer success.

Or, maybe the metrics are perfectly aligned with customer success, but unfortunately, end users are not the customer. The argument, “If you’re not paying for it, you’re not the customer; you’re the product being sold” explains the poor outcomes for end users resulting in positive business results from customers (typically advertisers). I believe a great number of employees in these companies do think of you, the end user, as their customer, but the systems in place to validate a successful outcome fail to reinforce the importance of the customer’s needs outside of the business objectives.

It is common to hear social companies talk about being “customer obsessed”, and I have met plenty of Product Managers that genuinely care about the end user as their customer. But how many companies translate this obsession into their performance metrics to deliver an outcome that is truly successful for the customer? How often do you see companies reporting objectively measured progress towards delivering customer well-being? Engagement metrics like daily active users, ads watched, shares, retention, number of posts, and time spent in app are all very common… but without consideration of customer well-being, what do engagement-driven metrics deliver in a social product that is fundamentally about human relationships?

Show me the incentive and I will show you the outcome.

Charlie Munger

Worse Human Interactions

Many of the negative customer outcomes so many people experience correlate with a positive result for the companies creating the product. Disagreement, anger, and outrage all drive activity and engagement… since last week your posts increased 23% and your time spent in app is up by 8%, but you’ve also unfriended Uncle Ned because he keeps posting fake political stories about your favorite candidate, and you disinvited your extended family from Thanksgiving.

But even positive content, combined with effectively keeping score of popularity through shares and likes, can lead to worse outcomes and lower self-esteem, as people tend to post their best moments, creating the perception that everybody else’s life is amazing while you do laundry, eat leftovers, and watch Netflix alone.

Worse Decisions

Humans have many cognitive biases, error patterns in the way we think, leading to irrational decisions. Online we are regularly influenced by an availability cascade, overwhelming our critical thinking by making obscure or even crazy ideas seem rational as they are repeated and seemingly reinforced as widely accepted when we witness more and more people supporting the idea.

You watch one video because you are amused that a guy thinks the Earth is flat, and then your recommended feed starts showing more support for his argument. Based on what is being presented to you, there seems to be a lot of support for this flat Earth idea. That one video you watched, thinking it was ridiculous, has led you down the rabbit hole of conspiracy videos, and you’re starting to think there might really be two sides to consider in this whole chemtrail thing, but good news, you’re watching 13 more videos and 72 more minutes than you did last week!

The poor outcomes don’t stop with the individual; they are reflected in negative outcomes for society overall. Misinformation about vaccines continues to lead to a reduction in vaccination rates and new outbreaks of mostly-eradicated diseases. Unfortunately, sensationalized false claims can go viral quickly, while corrections reach only a small percentage of the original article’s audience, so the fake information gains substantially larger public mindshare.

Balancing Business Metrics with Customer Empathy

For many businesses, validating successful customer outcomes is relatively straightforward… reducing their cost per widget, increasing their leads, and reducing time spent in a business process are all objective benefits. But for products that are fundamentally about human relationships, a successful customer outcome is more subjective; by most definitions of healthy relationships, it is not based on dependency, quantity of consumption, or other common assessments of engagement.

What metrics might a company consider if customer well-being were a consideration in the successful customer outcome? Factors like happiness, growth, confidence, personal enrichment, support, safety, and fulfillment seem like good candidates. In customer interviews, this would also mean understanding the real answer to the question, “How do you feel after using our product?”

Customer Well-Being is Measurable

The subjective nature of metrics like “customer happiness” presents a challenge. However, technology is reaching a point where it is becoming possible, at scale, to more objectively answer the question, “How does my customer feel?”. Sentiment analysis of text has matured considerably and can be used to understand how customers feel. Similarly, emotion recognition from voice and visuals can provide insights into immediate reactions. Technologies like these are being applied to problems such as predicting depression from written text and speech. Wearables with biometrics are becoming increasingly common and also provide an opportunity to assess the physical impact of online interactions.

Further reinforcing that measuring customer well-being is possible, in 2018 the New York Times piloted ad placements based on the emotions certain articles evoke. However, like many current applications of sentiment analysis, this use case emphasized the value created for the advertiser, focusing on targeting the customer with premium-priced ads when the customer is in an emotional state that is optimal for the advertiser. The examples cited targeted upbeat, inspired customers, but it is easy to imagine the same technology could be used to target customers that are upset, reactionary, and likely more susceptible to radical suggestions. In other words, perfect for divisive political targeting.

An encouraging example of prioritizing customer well-being comes from Dan Seider at Stigma, using input from webcam images, regularly processed by artificial intelligence, to understand the impact of online consumption on happiness. If this type of customer data can be secured (likely requiring that it never leave the customer’s device), this technology could lead to solutions that help people understand how their online habits are benefitting or harming their well-being. While empowering individuals with these sorts of tools is great, it represents third parties trying to provide protections from social products, rather than social companies considering customer well-being as part of their product success.

Codify Better Social Outcomes

From a business results perspective, there is little need for the current social giants to change. A couple of times a year we see news surface where customers are outraged by being exploited, manipulated, or endangered, a CEO repeats a statement about fixing things, and the market value of these companies generally continues to increase in spite of these problems.

I believe many CEOs are sincere in their desire to eliminate the social problems manifested in their products (I mean, who wouldn’t want that to go away), but I don’t see this desire supported by how the company objectively assesses success, and I am skeptical we will actually see improvements until customer well-being metrics are considered alongside engagement metrics. A commitment to results requires measurement, and cultural integration into what is considered success, from product performance to employee incentives. If you don’t track it, you probably don’t really care about it.

For earlier stage social products and companies with a commitment to better customer outcomes, it is easy to assume that strong product leadership holding this commitment is enough to stay on that path. Codifying what a better social outcome means will help make the path clear when there are inevitable product tradeoffs between short-term gains vs. long-term enduring value for customers. As new employees join the company they will see values like “we love our customers” not just as words painted on the wall, but as a requirement for success.

Does your product team include customer well-being as a desired outcome? I’d like to hear more, especially how success is measured – please leave a reply below!

Credits
Kids in Field on Laptops image by Unknown, via Pxhere
Blockhead Toy image by Unknown, via Pxhere
Girl on Playground image by Unknown, via Pxhere
Computer Draining Man image by Unknown, via Pxhere
Excited Kids on Laptop image by Unknown, via Pxhere

Make an Antique Garage Door Opener Internet Capable

I have an early 1990s garage door opener that does all of the things you need a garage door opener to do (it… opens the garage door). However, the remotes are the size of cinder blocks and I never have one with me when I need it, so I decided to find a way to use my phone instead. This project is part of a long history of unnecessarily connecting items in my house to the Internet.

Requirements

  • A janky garage door opener, ideally the kind with wired switches attached to your garage wall
  • Some form of a server… nothing powerful. A $50 Raspberry Pi is about 50x more powerful than you need
  • A relay controller. For this project I happened to have a CanaKit UK1104 USB relay controller laying around
  • Some wire to connect from your server to the garage door opener, CAT5 is overkill and works great
  • A patient / forgiving significant other

Installation

  1. Wait for your significant other to leave the house for at least 90 minutes.
  2. Connect the relay controller to your server
  3. Grab my Garage-Door-Controller code from Github and copy it into the html directory of your server. It includes PHP and Perl scripts, the best programming languages 😜
  4. Install the Perl package Device::SerialPort. On Ubuntu / Debian: sudo apt-get install libdevice-serialport-perl
  5. Make sure the script can access the serial device… On Linux, you can add the web user www-data to the dialout group, or if you want a less secure option, use visudo and add this line: www-data ALL=(root) NOPASSWD: /var/www/html/garage/garageinterface (use the path for your server)
  6. Make sure the file garageinterface is executable, chmod a+x garageinterface
  7. Run a wire from relay 1 on the controller to the same terminals on your garage door opener that the buttons on your wall are connected to (you can leave those wires in place, too… no need to make the buttons stop working). On your relay, the wires should connect to “COM” and “NO” (common and normally open). A quick way to test the relay from the command line is sketched below, after the wiring photo.
CanaKit UK1104 wired to an antique garage door opener
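
Before connecting anything to the opener, it is worth testing the relay from the command line. The sketch below is only an illustration – the serial device path, baud rate, and the REL1.ON / REL1.OFF command strings are assumptions from memory, so verify them against the CanaKit UK1104 documentation (or just use the garageinterface script from the repository):

 # configure the serial port, then pulse relay 1 briefly to simulate a button press
 # (device path and command names are assumptions; check the UK1104 docs)
 stty -F /dev/ttyUSB0 115200 raw -echo
 printf 'REL1.ON\r' > /dev/ttyUSB0
 sleep 0.5
 printf 'REL1.OFF\r' > /dev/ttyUSB0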

Opening Your Garage Door

When connected to the same network as your server, simply point your web browser to /garage and the magic begins. If you are using your phone browser, the “Add to Home Screen” option creates an icon on your phone and eliminates the menu bar, making a clean interface.

The garage door interface
It’s… pretty simple

The scripts provide a simple web interface that is responsive (it automatically adjusts to the screen where it is being rendered), so it works well in a phone web browser or whatever other web-capable device you want to use to open your garage.

There is a single “Garage Door Button” and pressing it… that’s right… it does the same thing as if you pressed the button connected to your garage door opener.

Of course you can connect the relay to whatever else you want to control… lights, refrigerators, bug zappers, sprinklers, your toaster.

Security Concerns

The HoT Garage “app” on my home screen.

If you are silly enough to follow in my path, I strongly suggest you only run this on a local home network (e.g. you must be connected to your home wifi) if you are using it on something like a garage door, partially because I didn’t consider security at all when writing the scripts, and more importantly, why in the hell would you want to open your garage door when you are not near your garage door? I know it sounds cool, but… no.

Happy Tinkering!

If you have a habit of wiring things up to teh Interwebs, I’d love to hear about your experiences… especially the ones that didn’t work out exactly as planned. Please leave a reply, below!

Hinder, Don’t Halt: Griefing Content Thieves for Fun and Profit

The art of deterring content theft is an ongoing game of cat and mouse – generally any barrier you create to prevent theft is temporary, as thieves continue to find new ways to steal the content, so long as the value of the content exceeds the effort necessary to steal it. For this reason, it can often be more effective to hinder thieves instead of trying to stop them.

I encounter this “hinder, don’t halt” pattern when talking with others that run large services, and you can see it reflected in solutions like shadow banning. One of the most common themes I hear is the satisfaction that comes from solutions that cause frustration for bad actors, so I’m sharing one from my personal experience…

At IMVU, customers called Creators make content that they sell to other IMVU customers. The content they create is 3D items like avatar clothing, items to decorate an environment, and ways to customize an avatar. This content creates real value for other IMVU customers, who spend real money to purchase it from the catalog of over 10 million items. While many Creators create content just for the enjoyment of creating, some do it as a business, with a few making over $100K US annually. Whether creating for pleasure or business, all Creators hated having their work stolen. And, since there is real money from the sales of content, there is real incentive for thieves to try to steal it.

At one point we discovered a site that was selling a service that would allow people to steal Creator content without paying for it. It was pretty easy to detect the service and the initial response was blocking them, which immediately broke their service completely and, not surprisingly, made the thieves quickly respond by finding a new way around the block. The block lasted less than a day and the thieves were back in business.

The next response was more fun… rather than blocking the thieves, we made their service not work… sometimes… and inconsistently. Code was added to detect thieves accessing content, and some of the content they accessed would be randomly, mildly corrupted. The corruption could be configured to occur at certain rates, on certain items, at certain times of day, and to be disabled when the access looked like testing for the corruption. As a result, the thieves’ customers started getting inconsistent results that would sometimes lead to content failing to load, and even crashes. If you are an engineer reading this, you understand why this is a nightmare scenario to debug and fix… customers are reporting different failure cases with no consistent way of reproducing the problem to understand the cause. And, since your code is working fine, the bug isn’t going to be found there… you eventually have to discover that you are being served different content than is being served to legitimate customers.

The result of hindering was much more effective than blocking… it took many weeks for the thieves to understand what was happening and, during this time, we could see them getting bashed by the people that paid them because the stolen content was ruining their experience. By the time the thieves had found another solution, they had such a bad reputation that people were less willing to give them money.

If you have dealt with content thieves I would be interested in hearing your stories, successful or not. Please leave a reply, below!

Credits
Cat and mouse chase image by Jeroen Moes
Dungeons & Dragons dice by Lydia

Rewards from Talking to Customers

Most people that build products or run companies have heard the mantra, “get out of the building – talk to customers.” It is easy to assume that talking to customers is only about building a better product. Talking to customers will help you build a better product, but more importantly, you may be rewarded by learning how your work changes people’s lives!

I recently had an experience that was so delightful I had to share it with my former employees, and they decided to share it with their millions of customers. Below is the excerpt from the IMVU blog:

You may remember a very familiar face in the photo featured in this story. Brett Durrett is and always will be a friend of IMVU, even after his 11 years on staff and nearly 5 years as our CEO. Beyond his professional titles, or even his leadership as CEO, Brett was an active user that frequently went into chatrooms to join the conversation, answer questions, solve issues, or simply say hello. On Fridays at the HQ office, it was common to see Brett speaking from a microphone about the week’s accomplishments, and always finishing with words of inspiration, a story of encouragement, or a new product to be excited about. Even if we didn’t hear your stories, Brett always told us your stories so that we could remember why we work at IMVU: we are here to spread the power of friendship, to help people find friends, to encourage them to express themselves, and to find an outlet for creative expression. Recently, our current Chief Operating Officer Kevin Henshaw forwarded an email he received from Brett to the entire company about how IMVU continues to work its magic on and off our product.

Brett’s email read like this:

On Monday I was wandering around New Orleans wearing my IMVU hoodie, as I am one to do. I went into a coffee shop and the woman at the counter asked me how I got my hoodie, to which I replied, “I used to work for IMVU”. Her eyes lit up as she proceeded to tell me how much IMVU meant to her as she was growing up.

Bea told me she used IMVU because it allowed her to connect with people without any stereotypes about who she was – she got to decide how she wanted to be seen. She also loved that it didn’t cost much to experience a fantasy lifestyle. She had a lot of friends on IMVU that felt the same. She really gushed about how important IMVU had been in her life. Her excitement went on for minutes. My traveling companion was taken aback, as I seemed to have rock star status. It was a chilly day in NOLA, but I gave Bea my IMVU hoodie (she had made me feel so warm inside that I really didn’t need it).

If you’ve talked to enough IMVU customers you know that Bea’s story isn’t unique… IMVU has helped people find their life partners, best friends, and caring families.

I thought I would use my chance encounter as an excuse to reach out to IMVU employees, say “hello”, and remind them that there are a lot of silly things that can happen on IMVU, but don’t lose sight of the really meaningful things as well! Bea’s story is a testament to what this is really about – helping people find new friends and creating something meaningful to benefit their lives. On behalf of Bea, myself, and millions of customers, keep up the great work!

Do you have a delightful customer story? I’d love to hear about it… please leave a reply!

Scaling Continuous Delivery: Happiness as a Metric

A few days ago Jeff Atwood (Coding Horror) suggested that a good measure of a tech company’s health is the time it takes for a simple change to become available to customers.

And while there are numerous metrics that determine the health of a tech company (see Jez Humble’s book, Accelerate, for an amazingly comprehensive overview), Continuous Delivery strongly correlates to successful outcomes.

I support Jeff’s assertion – I witnessed the value created by Continuous Delivery at IMVU, where we pioneered some of the crazy processes that would be followed by more sane practitioners. From day one, IMVU placed value on the speed of product iteration and “designed” build systems accordingly. In 2006, development was done in Windows using Reactor Server to provide the LAMP-ish stack, and the deploy process looked something like this:

 svn-server$ rcp website/* production:/var/www/

If you’re wondering why I omitted the test framework, I didn’t. Code went from a local Windows sandbox environment, to source control, to a live Linux environment running a version of PHP different than the local sandbox. Fun! While there were numerous problems with this system, iteration velocity was amazing (those one-word copy changes could ship in less than 5 minutes, and so could full features). This development velocity was a key component enabling IMVU to build a large, successful business in a space where failed companies outnumber survivors 50 to 1.

And to be clear, time to get a change live to customers doesn’t in itself indicate healthy tech, but a lot of tech health comes from the corresponding systems necessary to make rapid deployment work.

Fast forward a few years to 2008, when I transitioned from leading the operations team to leading the engineering organization. The build and deploy systems had matured, with reasonable test coverage, automated deployment, and automated rollbacks when something unfortunate made it into production. It was pretty cool, even though publicly the process was mostly received with the sentiment, “that will never work, and certainly won’t scale”.

Scaling Problems

One of my first challenges as the new engineering leader was a team unhappy about their ability to get work done because it was taking hours for a commit to become live to customers. It’s astonishing when you think about it – every engineer in the company had come from companies where the commit to live process was typically measured in months, but once they experienced the value of Continuous Delivery, anything more than minutes seemed unbearable.

Digging into the problem, I came to understand that the problem was not slow builds (although that was part of it); the most significant issues were caused by shared responsibility for the build systems which, combined with the desire to deliver features to customers, created a tragedy of the commons. When an engineer had a failure in the build system, the optimal solution for that engineer was to fix the problem in place, blocking the build system for anybody else in the queue, which meant the number of commits in the next build increased, which meant the chance of a failure in that build increased, ad infinitum. The result was that pushing to production could be blocked for hours, sometimes for most of the work day.

Solving for Happiness

I thought the best solution was to formalize a project, define a clear success outcome, and have a single person with the responsibility for (and therefore authority over) the build / deploy systems. The first problem was determining clear success criteria… any time metric I chose would be somewhat arbitrary, so instead I chose engineering happiness as the success criteria, or more specifically, that pushing to production was no longer causing unhappiness. While I generally hate subjective success criteria, there were ways to assess progress through 1:1 conversations and Likert scale surveys. We also had great (highly objective) data around commit to deploy times, so we could see the correlation to the more subjective happiness index.

There was some pretty straightforward work to improve the actual test and deploy speeds, including simple things like adding more hardware, the slightly less simple work of sorting tests to run by speed (a surprisingly large performance gain), and fixing the slowest of the tests. But some of the most important gains came from the human parts of the deployment system… engineers were required to immediately revert failing code and fix the issue in their sandbox rather than blocking the build system. This was not a popular policy change, as engineers immediately experienced the direct impact of a failed commit but didn’t immediately see any gains to the overall system. But after a few weeks the improvements were clear in the average commit to deploy time. And giving credit where it is due, Eric Prestemon was the “Buildbot Sheriff” that identified so many of the opportunities for improvement and delivered the results… many people helped, but Eric had the burden of hearing a lot of critical feedback about unpopular policy changes (eventually outweighed by the praise for the results he produced).

Eventually the build system frustration ceased being a common topic in 1:1 meetings, and it faded away as a meaningful problem in engineering surveys. The number we settled on was 12 minutes: when the commit to live time is 12 minutes or less, the system is operating well. That became the threshold for alerting – under 12 minutes, all is good; after that, we need to actively drive improvements. In practice, deploy time was usually around 11 minutes, 8 for parallel test builds/runs and 3 minutes for rollout checks (thanks for the reminder, @jwatte).

Diminishing Returns

I have been asked why we didn’t try to make the build and deploy systems as fast as possible… why not 2 minutes? We constantly worked on optimizing these systems, adding separate hypothesis builds, automatically isolating build servers to allow diagnosing and fixing without blocking, etc. And sometimes deployment would take less than 9 minutes.

However, much like the difference between 99.99% and 99.999% uptime for a service, the difference to the customer can be negligible while the resources necessary to deliver that improvement can be extraordinary. When business requirements are being met and engineering is happy with deploy times, the resources necessary to dramatically improve were better spent delivering value to customers.

Key Takeaways

  1. Working in a (well functioning) Continuous Delivery environment is empowering, naturally encourages other strong technical practices, and is hard to retreat from once experienced.
  2. Certain problems fall into what I call the “roommates and dishes” category, where “it’s everybody’s responsibility” sounds good, but in practice actually means “it’s nobody’s responsibility”. In these cases it is better to find a results-driven person and ensure they have responsibility and corresponding authority.
  3. Hire Eric Prestemon or somebody like him.

 

Have you worked in a Continuous Delivery environment and experienced non-obvious scaling challenges? I’d like to hear about your experience – please leave a comment!

Q&A on Digital Transformation

In August I presented The Challenges of Executing Lean Startup at Scale, generously hosted by Rangle.io in Toronto, Canada. Rangle is the premier digital transformation consultancy, founded on Lean Startup principles and achieving impressive growth – a really great success story. I spent some time with Nick Van Weerdenburg, Rangle’s CEO, discussing Digital Transformation.

Some of the topics covered in the conversation include:

  • Solving customer problems is more important than rigorously following a process
  • The challenges of being on an agile team while working with or being part of a non-agile organization
  • Successful agile transformation requires a culture change before a toolset change… most organizations get this backwards
  • How to choose metrics that are meaningful to your business

I hope you enjoy the video:

If you watch the video I would love your feedback! Please leave a comment below telling me what you think I got right and what you think sounds crazy.