For the 3 people that have been reading my posts, you know my journey from hacking off-the-shelf ESP8266 smart switches to creating custom devices, most of which I connect to Home Assistant so that my smart devices are secure and don’t rely on the existence of any particular vendor. The next step in this journey is using ESPHome to simplify the creation of custom software for the ESP8266/ESP32. For my first project, I created an ESPHome temperature and humidity sensor with an OLED display.
Why use ESPHome instead of something like the Arduino IDE? Simply put, it’s simple but powerful. As an example, the custom software for the temperature and humidity reader in my bathroom was 8 custom lines (which weren’t really code, but configuration). YAML configuration files provide access to a ton of sensors, and if you need to dig deeper with more complicated functionality, ESPHome provides lambdas that let you break into custom C++ at any time.
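To give a sense of scale, here is roughly what that configuration looks like in ESPHome. The pin assignment and entity names below are placeholders, not the exact values from my build:

```yaml
# Sketch of an ESPHome DHT11 config; pin and names are placeholders.
sensor:
  - platform: dht
    pin: GPIO2        # adjust to match your wiring
    model: DHT11
    temperature:
      name: "Bathroom Temperature"
    humidity:
      name: "Bathroom Humidity"
    update_interval: 60s
```

That's the whole sensor definition; once the device is adopted in Home Assistant, the two entities just show up.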
One of the other cool things about ESPHome is that, while it integrates seamlessly with Home Assistant, devices are meant to run independently, so if a device is out of range or has no network connection, it still performs its functions.
And I wanted to learn something new, so…
Why This Sensor?
I have a few temperature sensors around the house and I also keep one in the camper van, mostly out of curiosity so I can compare it with the in-house temperatures using graphs on Home Assistant. However, I realized that when I went camping I wanted access to the temperature, but couldn’t do so without being connected at home. The old version of the van thermometer was based on an ESP8266 Garage Door opener (I never shared that posting, so sorry, or you’re welcome) and I didn’t want to update it with a display screen, as I really don’t need that for a garage door (yes, I realize something not being needed usually doesn’t stop me from building it). I decided that I might as well take the opportunity to use ESPHome since it was simple and worked offline.
It’s Time to Build
I won’t go into the details of setting up Home Assistant, but if you are into home automation at all, it is super awesome. Installing the ESPHome add-on enables flashing and managing ESP8266/ESP32 boards, which then connect to Home Assistant automatically.
For the lazy, ESPHome is generally a few clicks and a little copy pasta. For this project, it’s no different, with the step-by-step details available on my Github project page.
For the hardware, I used the very tiny ESP-12F, a DHT11 sensor, this OLED display, and a few miscellaneous components. All of these fit pretty nicely in the case, although it is a pretty snug fit, so I highly recommend very thin wire; this 30 gauge silicone wire was super great for the job. Exact components, the wiring diagram, and the STL files to 3D print the case are on my Github project page.
When assembling, remove the pins from the DHT11 board and solder the wires directly to it so that it sits flush against the case, giving the sensor better exposure and a better fit. The DHT11 is also separated from the ESP8266 board, in the hope of insulating it from board heat that could skew the temperature reading (not sure that worked). There is a hole between the chambers to thread the wires through, and I suggest a bit of hot glue to block air flow once everything is wired.
As you can see in the photo, I used generous dollops of hot glue to hold the components in place… it isn’t pretty, but nobody will ever see it (well, unless you photograph it and blog about it, in which case still, probably nobody will see it). I sealed the case with Zap-A-Gap glue, as I found original Super Glue was way less effective.
Plug it in.
Okay, well… it’s a little more than that. The screen alternates between showing the temperature and humidity as text and a 24-hour graph that is a bit useless for now, since I am not sure how to get the historical values to label the graph. The button toggles screen saver mode on/off, and the screen saver activates automatically after a short amount of time.
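In case it helps anyone, the alternating screens can be sketched in ESPHome as display pages rotated by an interval. Everything here — the ids, font, and sensor references — is illustrative, not my exact config:

```yaml
# Sketch only: ids, font, and sensor references are placeholders.
font:
  - file: "gfonts://Roboto"
    id: big_font
    size: 16

graph:
  - id: temp_graph
    sensor: temp       # assumes a sensor with id "temp"
    duration: 24h
    width: 128
    height: 48

display:
  - platform: ssd1306_i2c
    model: "SSD1306 128x64"
    id: oled
    pages:
      - id: page_readings
        lambda: |-
          it.printf(0, 0, id(big_font), "%.1f°F", id(temp).state);
          it.printf(0, 32, id(big_font), "%.0f%%", id(hum).state);
      - id: page_graph
        lambda: |-
          it.graph(0, 10, id(temp_graph));

interval:
  - interval: 5s
    then:
      - display.page.show_next: oled
      - component.update: oled
```

The `graph` component only plots values it has seen since boot, which is why the history labels are still an open problem.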
If you want to get a little fancier, I have this sensor in the bathroom and it will turn the vent fan on (via Home Assistant) when the humidity reaches a certain level… it’s useful for anything where you want to control devices based on temperature or humidity, and even more useful if someone might want to see the temperature.
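On the Home Assistant side, the fan trigger is a simple numeric-state automation. The entity ids and threshold below are made up for illustration; use whatever your sensor and fan are actually named:

```yaml
# Hypothetical entity ids and threshold; adjust for your setup.
automation:
  - alias: "Vent fan on high humidity"
    trigger:
      - platform: numeric_state
        entity_id: sensor.bathroom_humidity
        above: 65
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.bathroom_vent_fan
```

A matching `below:` automation (ideally with a `for:` delay to avoid rapid toggling) turns the fan back off.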
Are you doing home automation with ESPHome? Do you have suggestions or requests? Did you actually read this blog post for some reason? I want to know… please leave a comment, below!
I made a few hardware adjustments from the original build, and I am using the much smaller NodeMcu ESP-12F for the board, and this SSD1306 0.96 Inch OLED I2C Display Module, which features a full text row in yellow and the rest of the screen in blue. Finally, I added a button to the project, because everything needs a button (I happened to use these).
The case isn’t fancy, but it works… the final build is approximately 1.25″ x 1.5″ x 1″ (width x depth x height), or 3.5cm x 4cm x 2.25cm for most of the planet. One challenge with the size is that everything is extremely tight in the case, so I recommend using very thin wire in your build. There are two 3D models, the base and the cover; assembly is just cramming everything in and gluing the case together. I was using relatively thick wires and everything is so compressed I didn’t even need to glue the boards in place, but it would probably be a good idea to use a little hot glue to secure things inside.
I really like the two-color display, which was more of an accident than anything: one of the original all-blue displays went bad and became very dim, so I ordered replacements.
With the addition of the button, I was able to add some new functionality. A quick press of the button will toggle the screen off and pause network updates, effectively a screen saver sleep mode. Another quick press turns the screen back on. A long press will enter menu mode. While in menu mode, a short press will iterate through the menu items, and a long press will select the menu item.
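The short/long press handling maps naturally onto ESPHome’s `on_multi_click` trigger. This is a sketch, not my actual config — the pin, timings, display id, and actions are all assumptions:

```yaml
# Sketch: pin, timing windows, and actions are placeholders.
binary_sensor:
  - platform: gpio
    pin:
      number: GPIO14
      mode: INPUT_PULLUP
      inverted: true
    id: ui_button
    on_multi_click:
      # Short press: toggle the screen / screen saver.
      - timing:
          - ON for at most 350ms
          - OFF for at least 100ms
        then:
          - display.page.show_next: oled   # assumes a display with id "oled"
      # Long press: enter menu mode.
      - timing:
          - ON for at least 1s
        then:
          - logger.log: "entering menu mode"
```

Each `timing` block matches a different press pattern, so one physical button can drive the whole menu system.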
All of the source code, 3D models, and wiring instructions can be found on my Github page bdurrett/BART-watcher-ESP8266. The project still needs work to be generic enough to be useful to anyone, but since this works for me (and I don’t think anyone else actually needs this), I will likely stop further development. That said, if anyone wants to contribute, I’d be happy to collaborate with anyone!
I frequently commute on BART and wanted a convenient way to tell when I should start walking to the train station, ideally something that is always accessible at a glance. A gloomy Saturday morning provided some time to hack together something…
I had already purchased a ~1 inch SSD1306 display and had extra ESP8266 boards laying around, so I figured I would start there. Wiring the screen to the board is super simple, just 4 wires (see diagram).
From there it was pretty simple to pull real-time train data using the BART Legacy API. BART also offers modern GTFS Schedules, which is the preferred way to access the data, but from what I could tell, this would make the project significantly more difficult given some of the limitations of the ESP8266. So, I went the lazy route.
Coding was pretty simple, most of the time was spent rearranging the elements on the screen. Well, actually most of the time was spent switching from the original JSON and display libraries I chose as I wasn’t happy with them.
There’s a lot to fit into a 1-inch display, but I got what I needed. The top line shows the date/time the station information was collected, and the departing station. The following lines are trains that are usable for the destination station, with the preferred (direct) lines in large type, and less-preferred (transfer needed) in small type. Finally, if a train is within the sweet spot where it isn’t departing too soon to make it to the station but I also won’t be waiting at the station too long, the departure time is highlighted. Numbers are minutes until departure, “L” means leaving now, and “X” means the train was cancelled (somewhat frequent these days).
In the example above, these are trains leaving Embarcadero for the North Berkeley station (not shown); the Antioch and Pittsburg/Bay Point lines require a transfer, while the Richmond line is direct.
At some point I would like to use a smaller version of the ESP8266 board and 3D print a case to make it a little nicer sitting on a desk, but then again, there’s almost zero chance I’ll get around to it. If anyone is into 3D printing design and wants to contribute, I’ll give you all of the parts needed.
The code / project details are available on my GitHub page at BART-watcher-ESP8266, feel free to snoop, contribute, or steal whatever is useful to you.
Do you have suggestions for features you think would be cool, want to remind me that I waste a lot of time, or maybe you even want one of these things? Please leave a comment, below!
I’ve been incredibly impressed with AI image generation, in how far it has come and how quickly advances are being made. It is one of those things that 5 years ago I would have been pretty confident wasn’t possible, and now there seem to be significant breakthroughs almost weekly. If you don’t know much about AI image generation, hopefully this post will help you understand why it is so impressive and likely to cause huge disruptions in design.
As a quick background, AI image generation takes a written description and turns it into an image. The AI doesn’t find an image, it creates one, having been trained on millions of images and effectively learning patterns that fit the description. The results can be anywhere from photo-realistic to fantasy artwork or classical painting styles… almost anything. There are several projects available for pretty much anyone to try for free, including Midjourney, Stable Diffusion, and DALL-E 2. I am using DiffusionBee, which runs on my MacBook even without a network connection (i.e. it isn’t cheating and pulling images off of the Internet). Oh, and image generation is fast… from a few seconds to about a minute.
If it isn’t obvious why this is amazing and likely to be massively disruptive, imagine you need a movie poster, blog images, magazine photos, book illustrations, a logo, or anything else that used to require a human to design. The computer is getting pretty good and can generate thousands of options in the time it takes a human to generate one. It can actually be faster to generate a brand new, totally unique image rather than search the web or look through stock photography options. For example, the robot image featured at the top of this posting was generated on Midjourney with about 5 minutes of effort.
As a practical example, recently Sticker Mule had one of their great 50 stickers for $19 deals and I wanted to create a version of the Banksy robot I use for my blog header, but I wanted a new style, something unique to me. However, I am not an artist so coming up with anything that would look good was unlikely. Then I remembered my new friends the robot overlords, and thought I would see if they could help me.
One of the cool things about AI image generation is that you can seed it with an image to help give some structure to the end result. The first thing I did was take the original Banksy robot, remove the background and spray paint from the hand, and fix the feet a bit. This didn’t need to be perfect or even good, as it is just used by the AI for inspiration, for lack of a better word.
I loaded up DiffusionBee, included my robot image and simply asked it to render hundreds of images with the prompt “robot sticker”. And then I ate dinner. When I came back to my computer, I had a large gallery of images that were what I wanted… inspired by the original but all very different. Importantly, they looked like stickers!
If you look through the gallery, above, you can see the robot stickers have similarity in structure to the original, but there is huge variation on many dimensions. In some cases legs are segmented, sometimes solid metal. Heads can be square, round, or even… I’m not sure what shape it is. The drawing style varies from child-like art to abstract. And again, these are all new, unique images created by AI.
The biggest challenge I had was trying to pick just one… so many were very cool. I finally decided and it took about another 5 minutes to clean up the image a little to prepare it for stickerification.
When I looked at my contact sheet, my army of robots, it reminded me of my early days developing video games… if I had wanted to make a game where each player gets a unique avatar, it would have taken months to have somebody create these images. Today it takes hours.
I mentioned that things are progressing quickly… in the last few months we’ve gone from decent images to beautiful images to generating HD video from text prompts. It isn’t hard to imagine that, in the not-too-distant future, it will be possible to create a full length feature film from the comfort of your living room, and that the quality will be comparable to modern Hollywood blockbusters. The next Steven Spielberg or Quentin Tarantino won’t be gated by needing $100 million in backing to make their film, the barriers will be significantly smaller. AI has the potential to eliminate some creative professions, but it also has the ability to unlock opportunities for many others.
What are your thoughts? Is AI image generation an empowering technology that will democratize creative expression, a horrible development that will put designers out of work, or do you just welcome our new robot overlords? Leave a comment, below!
If you followed my previous posts on hacking IoT (Internet of Things) devices to make a more secure and sustainable smart home, you may have the perception that this is an overly complicated process that no sane person would pursue. You’re not wrong, and over the last year I’ve had several failed attempts at hacking devices, for reasons ranging from the casing requiring a saw to the programming pins being inaccessible. However, I discovered CloudFree, love their products, and think they provide a simple solution for making a smart home that is truly under your control.
Note: This is not a paid posting, I am receiving no compensation, goods, or services in exchange, and have no ownership interest in CloudFree – I am simply a happy customer.
As a quick reminder, most IoT devices require an external Internet connection to function. The problem with this is that it is less secure: a random company, often in another country (frequently China), is controlling and updating the software, as well as harvesting data. Also, if the company goes out of business, this often means your device ceases to function.
I stumbled upon CloudFree as I was looking for an alternative for the Amazon Smart Plug, which is great, but could only be controlled by Alexa, and I wanted something that could also be controlled by Google Nest, Home Assistant, a web interface, or pretty much anything. As implied by their name, CloudFree sells devices that do not require an Internet connection, emphasizing user ownership and control. This sounded perfect.
Dipping my Toes in: the CloudFree Smart Plug
In addition to selling third-party devices, CloudFree was manufacturing their own Smart Plug, which had a similar form factor and came pre-installed with Tasmota, an open source UI. And, at $13 it was a good deal. I ordered two and about a week later they arrived.
Setup was super simple… plug it in, connect to the temporary wifi network it creates, and configure it to connect to your home wifi. You can also set up passwords and things like MQTT. It took about 3 minutes, and the switch had both a web interface and was fully connected to Home Assistant, making it accessible by Alexa as well.
The other details were nice, too… the packaging is simple paper and thin cardboard, and the actual device looks good and seems to have quality consistent with nicer devices I’ve seen. Oh, and it has a lot of functionality for things like tracking power consumption. I ended up ordering three more, which took a few weeks to arrive due to them being backordered.
Even Deeper: CloudFree Smart Bulb
I needed EVEN MOAR switches, and I decided to try the one other product CloudFree makes, their CloudFree Smart Bulb. This is a pretty basic 10W LED bulb that also lets you control the color, color temperature, and brightness, again with the super easy setup and Tasmota UI. I’ve just started playing with it, so I can’t give much of a review, but it seems well-made and does exactly what I was expecting. It is labeled “indoor use only”, but I am tempted to try it in my enclosed light post and change the color for holidays, events, or maybe an alarm. This shipment was relatively quick, with switches and bulbs arriving in about a week, and $15 for the bulb seemed like a good price.
CloudFree Wish List
I am really happy with CloudFree overall – it is a great resource for finding user-controlled smart home devices. They have a bunch of third-party sensors, plugs, and gadgets that are all user-controlled, no Internet needed. If I could change anything, it would simply be adding more products, ideally made by CloudFree. Specifically, I would love to find a well-made light switch (ideally with a dimmer), or the holy grail, a 3-way dimming light switch. But, for what they have right now, they are great, and I recommend them for anyone looking for a simple way to add to their smart home.
Have you found other great sources of secure, sustainable IoT devices? I’d love to know about them – please leave a comment below!
On January 8, 2021 Twitter permanently suspended Donald Trump’s account, joining Facebook, Instagram, and Twitch in the censorship of the President. Many prominent voices stated this is a dangerous encroachment on freedom of speech, sometimes making comparisons to China’s government censoring the people. Having operated communities of millions of users, I believe Twitter’s biggest failure was not applying its rules consistently to all users, enabling abuses to increase in magnitude and eventually requiring the drastic response of a permanent suspension. Further, a social platform that does not censor, where complete freedom of speech is guaranteed, is an idealistic vision, but would have questionable viability and is likely unwanted in practice.
I’ll start with the basics: the First Amendment right to freedom of speech prohibits the government from limiting that speech; it does not require citizens or companies to provide the same freedom. When a person or company shuts down discussion from someone on their property or platform, that person or company is exercising their own freedom of speech. For the most part, nobody has an obligation to let someone else use their property so that the other person can exercise freedom of speech.
But just because companies have the right to censor people, should they? This is a more complicated question. In theory, I want unlimited free speech, a world in which censorship doesn’t happen, because inevitably those in power, the censors, control access to ideas and information and will likely support their preferred narrative. In practice, I’ve learned that lack of moderation will likely destroy a platform, and moderation (a softer way to say “censorship”) is actually desired by communities, both online and in society in general.
Moderation is Necessary
Many platforms on the Internet start open and free and eventually become moderated, and a strong driver for that moderation is that abuse of the open platform destroys the value for others. Email started off great, with an inbox filled with relevant communications, and eventually turned into a signal-to-noise ratio of about 1:150, with fake Viagra and Nigerian princes rendering email nearly useless until filtering (moderation) eliminated SPAM. Message boards and social networks become unusable when SPAM and bots infiltrate, so in addition to community moderation, there is an ongoing, continually escalating battle to validate real users vs. bots. Even friendly actors can destroy a platform – when games were popular on Facebook and developers were heavily exploiting the feed for viral growth (hey, Zynga), the real social value declined as a majority of updates were about cows from your friend’s farm, and Facebook built tools to limit this game SPAM. There is always value in exploiting these open systems at the detriment of the other users, so abuse is the natural outcome.
This community desire for moderation, whether explicit or implicit, isn’t unique to online, we see it every day in society. No matter how much freedom we want for everyone, if somebody is singing in a theater during a movie, we want them to shut up or leave. We support one’s right to share their ideas, but if they are on a bullhorn outside of our house at 4:30 AM, we want them to go away. We set our own rules for private property and have laws for public property to support this moderation.
So when Twitter took action against Trump’s accounts, this was Twitter finally enforcing its policies on a user that had consistently abused the rules they established for their platform. They finally said, “like all other users, you can’t use the bullhorn at 4:30 AM either”. I am a strong supporter in our elected officials being held to the same rules that apply to regular citizens, especially since they are often the ones imposing these rules on the citizens (anyone that has been subject to a COVID shelter in place lockdown only to see their elected officials indoor dining or world traveling understands the rage-inducing hypocrisy). The editorial decision Twitter made was not the suspension of Trump’s account, it was years and years of allowing him to violate the terms they set for their platform, allowing a slow progression to eventually becoming a tool for organizing an attack on our government. It is impossible to know what would have happened if Twitter had enforced its policies consistently years ago, but generally problems are easier to manage when you address them early instead of letting them grow in magnitude and force.
Creating a Platform Without Censorship Is Difficult
But won’t censoring just drive these users to build another, more powerful network, or to hidden communities where they can’t be reached? Maybe, but it isn’t that simple. A large, functional community requires the support of many companies that are effectively gatekeepers, and they have restrictions on abuses of their platforms. If you want mobile apps, you need Apple and Google’s platforms. If you decide to be web only, you still need hosting for your servers, a CDN (how content is cached and distributed at scale) and DDOS (distributed denial of service, when people kill your servers by flooding them with traffic) attack protection, companies like Microsoft, Google, Amazon, Akamai, and Cloudflare. Cloudflare is a great example of a company that has shown extreme and sometimes controversial support against censoring any site (even some pretty horrible ones), but eventually shut down protection for a site that was organizing and celebrating the massacre of people. Each of these platforms has the ability to greatly limit the viability of a service they believe is abusive, which is exactly what happened to Parler when Apple and Google determined their lack of moderation was unacceptable. There are other possible technology solutions like decentralized networks that might be able to reduce the dependency on these other platforms, but this isn’t just a technology problem.
Beyond technology requirements, what about the financial viability of a completely open platform? Monetization introduces another set of gatekeepers, from payment processors to advertisers to legal compliance. While there will always be some level of advertiser willing to place ads anywhere (yes, dick pills for the most part), most major advertisers don’t want to be associated with content that is considered so abusive that no major platform wants the liability of supporting it. Depending on the activities on the site, banks can be prevented from providing services to the platform, and even with legal but edgy content (e.g. porn), there is a huge cut that goes to payment processors as they take a risk in providing money exchanges. Crypto can provide some options, but it is largely not understood by the average user and, depending on the content of the site, there can be legal requirements to KYC (know your customer), and liability for profiting on the utility of the site if the content is illegal. There are potential solutions for each of these, but it gets increasingly more difficult to achieve any scale.
Building on dark web is a possibility, although still vulnerable to many of the platform needs for scale. The dark web is also the worst dark alley of the Internet, difficult to discover and navigate, and the lack of moderation would mean many abuses, from honeypots (fake sites likely setup by law enforcement to have an easy way to track suspicious behavior) to scams and exploits preying on the average user that doesn’t understand the cave they’ve wandered into.
So while Trump certainly has a large base of followers and the financial resources (well, maybe) to have one of the best chances of being a catalyst for a new platform, there are many forces outside of that platform’s control that challenge its viability.
So, What’s Next?
If I had to guess, a few of the “alternative” networks will make a land grab for the users upset by the Presidential bans. The echo chamber of everyone having the same belief may not provide the dopamine response they get from a network with extreme conflict, so it may seem less interesting for the users. I also assume the environment is ripe for people to go after the next big thing, decentralized, not subject to oversight. Ultimately, societal norms will likely limit the scale and viability of these networks, and those limitations will likely be proportional to the lack of moderation.
So, all we have to do is ensure societal norms reinforce individual liberty while not enabling atrocities on humanity. It’s that simple. 😟
Using TUYA-CONVERT is preferred since it doesn’t require opening up a device or soldering, but it seems like all newer devices are using software that can’t be hacked wirelessly anymore, so you will likely need to open your smart home device. Now, let’s go void some warranties!
Gosund Smart Dimmer Switch (SW2)
This dimmer switch has a nice capacitive touch panel for changing the lighting level, so it feels a lot like adjusting something on a touch screen. Since Gosund also makes the SW1 switch I started with, I was hopeful it would be similar and I could avoid soldering… not so much.
Like the SW1, the SW2 requires a Torx T5 screwdriver to open. Unlike the SW1, the SW2 dimmer switch has two circuit boards in it, connected by a small cable. Reading about this switch, one person claimed it could not be hacked with that cable connected – this is not true, and I bricked one of these by detaching the cable… not recommended. Unfortunately, the serial connections are in the middle of the board, so the test hook clips I used on the SW1 would not work here. However, the connection points are pretty big and well-labeled, so soldering wires to them is pretty easy. Once I had connections, the process to install new software was super simple, exactly like the SW1. It’s nice when things just work!
But, of course, things didn’t just work. When I installed the dimmer, the dimming functionality didn’t work from the switch. Looking at the Tasmota template details for the Gosund SW2 Dimmer, this switch requires extra scripting to function properly. However, scripting is not available in the basic Tasmota software, so it needed a different version. Fortunately, once you have Tasmota installed, switching the software is easy and only requires a web browser, using “Firmware Upgrade” in the web interface. Except it isn’t always so easy. Trying to install tasmota-scripting.bin from the unofficial releases failed; I first had to install tasmota-minimal.bin to get the smallest install, and then install the compressed version of the unofficial release, tasmota-scripting.bin.gz (only the .gz version would install successfully). I used the OTA (over the air) install for the minimal software (pointed to the official OTA releases), and manually uploaded the scripting gzipped binary downloaded from the unofficial experimental builds. Once installed, there are new menu options in the web interface under “Configuration” -> “Edit Script”; simply paste in and enable the script from the template page. None of this was complicated, but it also wasn’t very obvious… hopefully I can save you some trial and error.
And the switch works great and immediately worked with Alexa (make sure emulation is set to “Hue Bridge” to enable Alexa to use the dimming functionality).
Youngzuth 2-in-1 Switch
The Youngzuth 2-in-1 Switch is actually two switches that fit into the space of a single switch. When I opened the switch (Phillips head screwdriver) and started looking around the circuitboard, I couldn’t find any connection points for the serial interface. I finally hit the point I had been dreading… needing to solder directly to the chip.
The Youngzuth 2-in-1 uses a TYWE3S package, and fortunately a lot of details are available on the Tuya Developer website, so it was pretty easy to figure out the chip connections. I really hate soldering, especially on tiny components next to other tiny components, so I had a margarita to steady my hand.
Once wires were connected, installing the software was a breeze. Configuration was also easy, with an example provided in the Youngzuth 2-in-1 template.
Full disclosure, I have not yet installed the Youngzuth switch, as I made a rookie mistake, not realizing there is no same-feed neutral connection at the switch location. Once installed I will post an update if anything required extra work.
If you have any questions or different experiences with these devices, please leave a reply below!
If you’ve ever had a free weekend, a desire to create a more secure smart home, and questionable judgment, you’ve come to the right place. In this post I’ll talk about how to take common IoT (Internet of Things) devices and put your own software on them.
Disclaimer: depending on the device, this exercise can range from pretty easy to drink bourbon and slam your head against the desk difficult. Oh, and there is some risk of electrocuting yourself or setting your house on fire. So everything after this point is for entertainment purposes only…
Why Hack Your IoT Devices?
Most people creating a smart home take the easy path… pick out some cheap and popular devices on Amazon, install the smartphone app to configure them, and they are good to go. Why would anyone want to go through the extra effort of hacking the device? There are a few good reasons:
Security: With few exceptions, most smart devices require installing an app on your phone, often from an unknown vendor and with questionable device permissions. The devices themselves are tiny, wifi-connected computers, and their software is updated by connecting to a server in some country, which installs new software on a device connected to your home network. Having a cheap device on your home network that requires full access to the Internet to work is bad, but it is worse when that software can be changed at any time, to do whatever the person changing it wants it to do. This could turn your light switch into part of a botnet, or worse, it could be exploited to attack other devices on your home network. By replacing the software, you create a device that works properly without ever needing access to the Internet, lowering the security risk. You can also see (and change) exactly what software the device is using.
Sustainability: Since these devices require communicating with an external company for configuration and updates, when that company stops supporting the device or, worse, goes out of business and turns off its servers, your device becomes useless or stuck in its current configuration forever. By replacing the software, you can support the device even if the company ceases to exist. And by using open source software with a robust community, you will likely have very long-term support.
Because I Can (mu ha ha ha): Okay, this is more of a fun reason, but worth mentioning. I’ve generally been much happier with the hacked versions of my products, whether it be my TiVo, Wii, or car dashboard. Smart light switches are a relatively low-risk hack, as they are inexpensive, and I’m assuming the worst-case risk is turning one into a brick, not causing an electrical fire (I’ll update the blog if I have an update on that).
My adventure started with the spontaneous purchase of a Gosund Smart Light Switch. Like a gazillion IoT devices sold by name-brand and random manufacturers, this switch is controlled by an ESP8266. Most of these ESP8266 devices use a turnkey software solution made by Tuya, a Chinese company powering thousands of brands from Philips to complete randos.
For security and sustainability reasons, I decided I didn’t want this switch connected to my home network, and even with complex firewall rules limiting its access, it would still need to reach the open Internet and other devices in my house to work properly.
I did some research and found Tasmota, an open source project that replaces the software on ESP8266 or ESP8285 devices, eliminating the need for Internet access and enabling functionality that makes them easier to connect to controllers like Amazon’s Alexa. The older examples required disassembling the device and soldering to hack it, which is exactly what I didn’t want to do. However, more recently an OTA (over the air) solution appeared that didn’t require opening the device at all, and did all of the hacking over wifi… that sounded great.
Tasmota Wifi Installation
When I tinker I like to use a computer that I can reset easily so that I don’t have to worry about an odd configuration causing problems later. I have an extra Raspberry Pi that is handy for this, and I installed a clean version of Raspberry Pi Desktop on an extra microSD card.
I installed TUYA-CONVERT, which basically creates a new wifi network and forges DNS (how computers translate a name like tuya.com into the numbers that identify a server) to resolve to itself rather than the Tuya servers, so that when the device goes to get a software update from the mothership, it gets the Tasmota software installed instead – hacking complete.
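To illustrate the idea (this is my own sketch, not tuya-convert’s actual code), the rogue access point’s DNS trick boils down to answering every lookup with its own address. The IP here is made up:

```python
# Conceptual sketch of the DNS forgery tuya-convert relies on: every DNS
# lookup the device makes is answered with the Raspberry Pi's own address,
# so the "update server" the switch contacts is really us.

PI_ADDRESS = "10.42.42.1"  # hypothetical gateway address of the fake AP

def spoofed_resolve(hostname: str) -> str:
    """Answer any DNS query with the Pi's address instead of the real one."""
    # A legitimate DNS server would look the name up; the rogue AP ignores
    # the hostname entirely and always points the device back at itself.
    return PI_ADDRESS

# The device thinks it is fetching an official firmware update...
assert spoofed_resolve("a.tuyaus.com") == PI_ADDRESS
# ...but so does every other lookup it makes.
assert spoofed_resolve("update.example.com") == PI_ADDRESS
```

The real tool also has to impersonate the Tuya update protocol itself, but the redirect is what makes the device talk to the Pi in the first place.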
I started running the tuya-convert script on my Raspberry Pi and, rather than go through the full process of installing the switch in the wall, I found a standard PC power cable (C13) was the perfect size to hold the wires in place and allow testing on my desk. DO NOT DO THIS – I am showing you only as an example of what a person of questionable judgment might do. The switch powered up, and on the tuya-convert console I could see it connecting and trying to get the new software! I love it when things just work.
But then, it didn’t work. While there was a lot of exciting communication happening between the Raspberry Pi and the switch, ultimately the install failed. Looking at the logs, I was getting the message “could not establish sslpsk socket“, and found this open issue, New PSK format #483. Apparently, newer versions of the Tuya software require a secret key from the server to do a software update, and without the key (known only to Tuya), no new software will be accepted. So, damn… these newer devices can’t use the simple OTA update. Also, if you have older devices, do not configure them with the app they come with if you plan on hacking, as that will update them from the OTA-friendly version to one requiring the secret key.
Tasmota Serial Cable Installation
I realized I was too far down the rabbit hole to give up, so it was onto the disassembly and soldering option. The Tasmota site has a pretty good overview of how to do this, although I thought a no-solder solution would be possible, and tried to find the path that requires the least effort (yay laziness).
Opening the switch required a Torx T5 screwdriver (a tiny, star-shaped tool), and I happened to have one lying around from when I replaced my MacBook Pro battery. Looking at the circuit board, I realized that the very tiny labels and contact points, combined with my declining eyesight, made this a challenge. I took a quick photo with my Pixel 4a and zoomed in to see what I needed… the serial connections on the side of the board (look for the tiny RX, TX, GND, and 3.3 labels… no, really, look). While soldering would be the most reliable connection, I was hoping test hook clips would do the job.
Since I was already using a Raspberry Pi, I didn’t need a USB serial adapter, as I could connect the Pi’s GPIO directly to the switch. Again, the Tasmota project has a page giving an example of connecting directly to the Pi. Whatever method you use, it is critical you connect with 3.3V, not 5V, as the higher voltage will likely fry the ESP8266. If you have a meter handy, check and double-check the voltage. And if you’re using Raspbian, you may find /dev/ttyS0 is disabled… you will need to add enable_uart=1 to your /boot/config.txt file and reboot.
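For reference, enabling the Pi’s serial port is a one-line edit. Note that the value must be 1 – setting enable_uart=0 keeps the UART disabled:

```ini
# /boot/config.txt — expose the serial port as /dev/ttyS0 after a reboot
enable_uart=1
```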
I connected the switch directly to the Raspberry Pi. There were several annoying things about this, starting with the fact that each time the switch is connected to the 3.3V pin, it reboots the Pi. And since almost every command to the switch requires resetting its programming mode through a power cycle, that means rebooting the Pi frequently (fortunately it is a fast boot process).
The good news is, the test hook clips worked, which was a bit of a surprise. I added a connection from Pi ground to the switch’s GPIO0 (green wire in the photo), as that forces the switch to enter programming mode at boot (it is okay to leave that connected during the hacking process, or you can detach it once it is in programming mode). I made sure everything was precariously balanced to add excitement and more opportunities for failure into the process. I was able to confirm that I had entered programming mode and had access to the switch via esptool, a command-line utility for accessing ESP82xx devices. Success! 🎉
The bad news is, other than being able to read the very basics from the switch – the chip type, frequency, and MAC address – pretty much everything else failed. And each successful access only worked once and then required a reboot. I was unable to upload new software to the switch. After researching a bit, the best clue I found was reports of voltage-drop problems on homemade serial devices, and wiring directly to the Pi circuit board seemed like it might apply. At this point I needed a drink, and went with a nice IPA.
But hey, once you’re this far down the rabbit hole, why stop? I decided to try a more traditional serial connection, using a CH340G USB to serial board.
Serial Killer Part Two
Apparently there was an issue using the Raspberry Pi directly for the serial communication, because the USB-to-serial adapter worked perfectly. I validated the connection using esptool and then used the tasmotizer GUI, which makes it easy to back up, flash, and install new software on the switch. Many steps require rebooting the switch to proceed to the next step, but that is as simple as unplugging the USB cable and plugging it back in (even better, it doesn’t trigger a reboot of the Raspberry Pi each time).
Once the new software is installed, there is one final reboot of the switch (don’t forget to disconnect the ground to GPIO0 or else it boots back into programming mode). At this point the switch sets up a wifi network named tasmota[mac], where [mac] is part of the MAC address. Connect to this network, point your browser to http://192.168.4.1, and you can configure your device. Set AP1 SSId and AP1 Password to your home wifi credentials, click “Save”, and a few seconds later your switch will be accessible from your home network.
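If you would rather skip the web UI (or are still connected over serial), Tasmota’s console can set the same credentials with its Backlog command – placeholder values shown here:

```text
Backlog SSId1 YourHomeNetwork; Password1 YourWifiPassword
```

The switch reboots and joins your network once the command is applied.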
After several years of waiting for Apple to release anything inspirational as a replacement for my Early 2015 MacBook Pro, a failing keyboard finally pushed me over the edge to purchasing a Dell XPS 13 Laptop. This is my initial experience moving back to Windows after 8+ years… since PC hardware options are nearly infinite, I am focusing on the experience going from macOS Catalina (10.15.6) to Windows 10. That said, so far the XPS 13 hardware seems amazing, even compared to a modern MacBook I use for work.
The initial setup with network and account was really smooth, very approachable. If anything could be better, I tend to use extremely secure passwords that are not easy to enter reliably, and before any password managers can be installed this is a manual process. I would love to see a solution that would use the camera to scan a QR code and have the password app from a phone generate the QR code (please, steal that idea everyone).
Once I made it to the desktop, I found the touchpad controls jarring… I can’t fault Windows for this; all of my desktop navigation is macOS muscle memory. I found various settings to ease my journey. And getting used to the menus, and how apps are listed, is a learning experience… pretty sure I’m doing it wrong.
I spent a lot of time searching because I could not believe this was the non-broken behavior… with multiple monitors, dragging a window between monitors of differing DPI is a tragedy, and in some cases a strategic exercise in getting the window usable on another monitor. I’m not sure how any designer got this so wrong: apparently the window does not scale to maintain its proportional size, instead switching to the new size when the window is 50%-ish onto the destination monitor.
Spotify going into giant-mode as it is moved to my external monitor.
This experience is, to say the least, jarring. If you are coming from a Mac, you are used to the window maintaining its apparent size even when traversing monitors of varying sizes and DPI (and the math to make this work properly is relatively simple on the engineering side). The odd part is, once the window is fully transitioned to the destination monitor, it snaps to a size that matches the source monitor. In some cases it becomes nearly impossible to drag the window, because the gigantic, expanded version results in a window that can’t make it 50% of the way to the destination monitor, so it needs to be resized (sometimes multiple times) to work.
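Since I claimed the math is simple, here is a minimal sketch (my own illustration, not anything from Windows or macOS) of the proportional rescaling a window manager could apply when a window crosses monitors; the DPI values are made up:

```python
# Keep a window's physical (on-glass) size constant across monitors by
# rescaling its pixel dimensions by the ratio of the monitors' DPI.

def rescale_window(width_px: int, height_px: int,
                   source_dpi: int, dest_dpi: int) -> tuple[int, int]:
    """Return pixel dimensions that occupy the same physical area on the
    destination monitor as (width_px, height_px) did on the source."""
    scale = dest_dpi / source_dpi
    return round(width_px * scale), round(height_px * scale)

# A 1280x800 window moving from a 96 DPI external display to a 192 DPI
# laptop panel should double its pixel dimensions to look the same size...
assert rescale_window(1280, 800, 96, 192) == (2560, 1600)
# ...and halve them going the other way, rather than arriving giant-sized.
assert rescale_window(2560, 1600, 192, 96) == (1280, 800)
```

One multiplication per axis; the Spotify-in-giant-mode behavior is what you get when this step is skipped until the window crosses the halfway point.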
UI Size Compatibility
This is another problem that makes me wonder how the average consumer is going to know how to make things work… some programs, even modern ones, don’t render their UI properly unless you modify settings in a Windows 95-era system dialog. For both Gimp and DaVinci Resolve, the UI was unusable on install.
Gimp UI at default settings. This screenshot is extremely generous, as it was a small window. However, the brush icons are about 2 millimeters wide. The rest of the UI is overlapping text.
The solution for this is cryptic. The user must find the application executable by digging through the bin folder, open its properties, and find the “Change high DPI settings” button.
Of course, I should need to set high DPI on a per-program basis….
And in this settings dialog there are additional, non-obvious options for making the UI work properly.
And, even less obvious: you should use the high DPI scaling override and select “System”.
On the bright side, I was able to get these programs to render properly with a usable UI (although DaVinci Resolve is a great example of a window that is almost impossible to move to another monitor based on the extended desktop problems mentioned earlier).
Crashtastic Browser Tabs
It is possible that this is not Windows, but my initial research suggests this problem is specific to newer versions of Windows 10 (at least the 64-bit builds) and happens in (at least) the Chrome and Edge browsers. Browser tabs crash frequently.
After 45 minutes, five browser tabs crashed with the error code STATUS_BREAKPOINT.
Since I have read reports of this in both Chrome and Edge, it is possible this is a bug in Chromium, which they both share.
Is it Me?
I am open to the possibility I am doing something horribly wrong. Honestly, I would love for somebody to p0wn me, and let me know how I missed the obvious “don’t do absurd stuff” checkbox in the setup process. However, I am sort of handy with computers and from looking around, many people are experiencing the same issues… And even if I missed something, for a great consumer experience, this should just work.
If it seems like I’m being a little critical based on my first 48 hours, that’s because these friction points are consuming a lot of my time. I expect adjusting to different UI controls, but I don’t expect having to fix clearly broken behaviors right out of the box, using all modern software.
Otherwise, Windows 10 looks like it has caught up with, and possibly surpassed, macOS in many ways. I’m looking forward to getting past the broken glass and barbed wire so I can start appreciating the rest of the experience.
If you’re a wizard with Windows and have some sorcery to solve these problems, please leave a comment and I will shout your praises.
Update August 31, 2020: I installed the 32-bit version of Chrome and it seems to have slightly reduced, but not eliminated, browser tabs crashing (super subjective observation).
Update September, 2020: I gave up and went back to a MacBook Pro. The Dell laptop went to a friend, and eventually Dell had to replace the motherboard, which seems to have solved the random failure issues (but none of the UX/UI issues, obviously). I’m loving my new MacBook Pro, even though I was probably the very last person in the world to buy an Intel MacBook, since the M1 was released about 15 seconds after my purchase.
Over the last couple of days I’ve been looking at the various product announcements that came out of Google I/O 2019, and there were a couple of themes that got me pretty excited about where Google can go and how that can make a pretty positive impact on millions of people.
Creating Opportunities for People… All People
I loved the Google Lens announcements from Aparna Chennapragada, because the application of the technology can make such a huge difference in people’s lives, and not just the people I typically see in Silicon Valley wearing fleece vests and sipping cold brew coffee. What was most compelling to me was the transcribing / Google Translate integration that was demonstrated, especially when combined with the processing being done on device (not cloud) and being accessible to extremely low-end ($35) devices. Visual translation was always a very cool feature and, when I was trying to figure out menus in Paris, I was happy to have the privilege of a high-end phone and data plan. Making this technology widely accessible enables breaking down barriers created by illiteracy, assisting the visually impaired, and helping human interactions in regions with language borders.
Google also announced Live Caption, where pretty much every form of video (including third party apps and live chat) can have real-time subtitles. This is also done on-device, and works offline, so it can be applied to live events, like watching a speaker at a conference. A shoutout to my friend and former colleague KR Liu for her work with Google on this project, which makes the world far more accessible to people with hearing challenges.
Also notable, Google’s Project Euphonia is making speech recognition more accessible to people with impaired speech.
Movement Towards Device vs. Cloud
The “on device” and “offline” features I mentioned (and were part of other announcements like Google Assistant improvements) are important because of the implications they have in making the technology available to everyone, and also because of the personal privacy that capability will enable.
Of course, my data, Google’s access to it, and personal privacy is a much larger, complicated conversation… for now I am going to focus on possibilities, not challenges.
For years there has been a move for all aspects of people’s lives to be captured and collected in the cloud. There are many reasons this may have been necessary, from correlating data to make it useful, to raw processing power requirements, to over-reaching policies and business models requiring all the things to win. Once in the cloud, personal information can be used for purposes never imagined by the consumer, including detailed profiling, sharing with third parties, accidental leaks to malicious parties, revealing personal content, and various other exploitations that can negatively impact the consumer.
As the processing stays on your device and does not require transferring data off of your device, it enables products that can still provide incredible benefits while also being respectful of customer privacy. This is exciting as there are product opportunities in areas like personal health (physical and mental) that will likely require deep trust and protection of consumer information to gain wide acceptance and benefit the most people.
Personal Assistant of My Dreams
And something I am more selfishly excited about…
For several years I wished that all of the products in Google would integrate with each other and eliminate almost every manual step I have in organizing my day. I am going to side-step the discussion about how much data a company has about an individual and say that I intentionally choose to trust my information with two companies (Google being one), because of the value I get from them. I use Google to organize most aspects of my life, from email communication to coordinating my kids’ schedules, video conferencing, travel planning, finding my way around anywhere, and almost every form of document. As a result, all the parts of Google know a lot about me. But still, when I send an email to set up a meeting, I usually need to manually add that to my calendar and then also add in the travel details (I frequently take trains instead of driving)… it’s a couple of extra minutes that I could be spending on better things, or just looking at pictures of cats on the Internet.
With the progress of Google Assistant and Google Duplex, I am seeing a path where administrivia is eliminated, where email, text messages, phone calls and video conferencing can also provide inputs that guide this assistant into organizing my life behind the scenes… Action items discussed in a Hangout can automatically result in a summary document, a coordinated follow-up lunch, optimal travel details, and a task list.
There is an obvious contradiction between my excitement for the announcements that emphasize better human outcomes and my “let Google know all the things” excitement over a personal assistant, but again, this is about my personal, intentional choice to share data vs. products that mandate supplying personal data, often far in excess of what is necessary to deliver the product or service.
There were some other “that’s cool” announcements, and I’ll probably be buying a Pixel 3a, which seems like a great deal for the feature set, but overall I’m more excited about the direction than the specific products showcased.