The magic of NotebookLM (and my first podcast)

One of the more practical and super easy to use AI tools I’ve seen is NotebookLM, where one can add documents, text, websites, and more to a notebook and then tell AI to do something with all of the content. I’ve found this incredibly handy for things like adding a link to a keg distributor and telling it “create a list of all beers categorized by style of beer and sorted by ABV” (don’t judge). This is ~30 minutes of work accomplished in less than a minute.

One of the features of NotebookLM is the ability to make a podcast out of the content. I did this for the first time and… holy crap, it’s amazeballs.

One of my more popular (and plagiarized) postings is Causes of Backpacking and Hiking Deaths, so I decided to try the “make a podcast” feature. I’m… blown away. First, I encourage you to read the original post, and then listen to the NotebookLM-created podcast:

To be clear, I gave no other direction… I added the website and clicked the “make podcast” button.

So, the obvious response would be “sure, Brett… it’s taking your content and summarizing it”. But that’s the magic: it’s not! So much of the commentary comes across as observations that are not mentioned in the original post. For example, my post mentions that hiking is more dangerous than skydiving, but the podcast makes the observation that this is because we feel more comfortable in that unsafe situation. The AI also picks up that the post is obviously a humorous take on the dangers.

The AI podcast also extrapolates from what was in the blog post. Where I mentioned the danger of temperatures (hot and cold), the podcast adds the phenomenon of “paradoxical undressing” (never mentioned in my post) and the fact that with heat stroke you may not feel thirsty.

In talking about health issues, my post mentions the problems; the podcast comes up with ways to prepare for those risks. And while I mention the danger of too much water (drowning), the podcast highlights the risk of flash floods and canyons. None of that was in my post.

Following the summary of my post, the AI podcast hosts go into a dialog about how to protect yourself from all of these dangers. These include space blankets, lightning avoidance, and proper water treatment. None of that was in my post.

I’ve heard a few other examples of these podcasts, where people take a ton of content and make an easy-to-consume podcast from it, and I have consistently been impressed. But even if you don’t need a podcast, I encourage you to take a look at NotebookLM and see if it can help you eliminate some of the more mundane tasks in your life… summarizing long documents, reformatting data from a website, or identifying insights.

For example, I uploaded the Unabomber Manifesto (again, don’t judge) and asked, “show examples of contradictions in this manifesto”. Some of the insights included:

  • In the introduction, the manifesto states, “The Industrial Revolution and its consequences have been a disaster for the human race.” However, the author also acknowledges that the Industrial Revolution has increased the life expectancy of those in “advanced” countries.
  • The author acknowledges that “primitive man is physically less secure than modern man,” but argues that modern man suffers from more insecurity because he lacks control over his environment. But, the author also argues that freedom, which the author defines as having “power…to control the circumstances of one’s own life,” will be increased if “the power of the INDUSTRIAL SYSTEM [is] broken.” It is unclear how the power of individuals can increase if the industrial system, which has given humans some power over nature, is destroyed.

And while these are not the deepest insights, they took less than 30 seconds of my time to surface from a relatively large document.

I highly recommend NotebookLM to everyone, especially anyone who doesn’t have much experience with AI. It’s really pretty simple to use and can make a lot of long, mundane tasks incredibly quick and simple.

Have you found a great use for NotebookLM or some other cool AI tool that is easy for anyone to start using? I’d like to hear about it, please leave a comment below!

Make Yourself into an Anime Figurine with AI Image Generation

If you’ve ever wanted to see a figurine of yourself but you have no artistic talent, AI image generation can make that dream come true, and you can try it for free. I jump right into the “how to” and save my boring commentary for the end of this post, so you can skip it.

Brett, in real life

I’m using DALL-E for my image generation, which requires a paid subscription, but you can get free access to it through Microsoft Bing Image Creator (requires a free Microsoft account). Once you have signed in, look for the text input field next to the two buttons, “Create” and “Surprise Me”. The text field is where you describe what image you want the AI to generate; you then click “Create” and a few seconds (or minutes) later, up to four images will be displayed. This process is called “prompting”, which is the common way to guide AI toward the desired output. But getting AI to do exactly what you want is a little like herding drunk cats, so crafting the prompt can take some effort and some understanding of how things work under the hood. We’ll skip that for now and just start making fun things…

Anime figurine Brett with laptop and margarita

The structure for the prompt is “Anime figurine of <my description, skin tone, eye color, hairstyle, outfit>. The figurine is displayed inside a box with <text on box> and logo for the box, allowing visibility of the figure, typography, 3D render”. To make something that looks sort of like me, I used “Anime figurine of a shaved head, bald on top, nerd, white skin tone, dark gray hair, blue eye color, brown short beard, brown eyebrows, black shirt, jeans, Converse high tops, wearing blue rimmed glasses, wearing a watch, holding a laptop and a margarita. The figurine is displayed inside a box with Brett and logo for the box, allowing visibility of the figure, typography, 3D render”.

AI reminding Brett of what he lost

Once you’ve tried this for yourself, you probably noticed a few things… Most obviously, somehow the AI didn’t do what you thought you told it. For example, while I prompted “bald on top”, one of my images clearly had hair, which might be the AI getting confused by the conflicting “dark gray hair” in the prompt. I have found replicating hairstyles, even bald hairstyles (if… that’s a hairstyle?), can be challenging. I’ve yet to be able to get any consistency with hair only on the sides and back of the head. The other thing you will probably notice is the wild things that can show up in the image, especially when it comes to text generation, where AI tends to get… creative. Some of the words you use in your prompt may show up in the image, and misspelling is not uncommon.

Cheers!

There is considerable variation in the images, some looking more like the giant-headed Funko Pop figurines, and others having pretty realistic proportions. Prompting for another common outfit I wear, “Anime figurine of a shaved head, bald on top, nerd, white skin tone, dark gray hair, blue eye color, brown short beard, brown eyebrows, black shirt, tan pants, brown leather boots, wearing blue rimmed glasses, wearing a watch, holding a laptop and a pint of beer. The figurine is displayed inside a box with Brett and logo for the box, allowing visibility of the figure, typography, 3D render” created something a little more proportional.

Funko Pop Brett

So play around a little and see what you get… if anime isn’t your thing and you really love the Funko Pop style, try swapping in the prompt “Funko style figurine of a shaved head, bald on top, nerd, white skin tone, dark gray hair, blue eye color, brown short beard, brown eyebrows, black shirt, jeans, Converse high tops, wearing blue rimmed glasses, wearing a watch, holding a laptop and a margarita. The figurine is displayed inside a box with Brett and logo for the box, allowing visibility of the figure, typography, 3D render”.

This gallery contains more examples:

Boring Commentary

A little over a year ago I wrote Robots Building Robots: AI Image Generation, where I used my laptop for AI image generation, meaning I had to use substantially less powerful AI models than are available in the cloud, where processing power and memory can be massive. The less powerful model was fine for the specific application I had in mind (a cartoon-like sketch of a robot for a sticker), but a few people commented that the quality of the AI images was average, and some were skeptical about AI’s capability.

In that same post, I mentioned Midjourney; at the time, version 4 was just coming out and already looking pretty amazing. In the 14 months since then, the quality and capability have continued to improve at an astonishing pace. For a detailed look at Midjourney specifically, check out this post from Yubin Ma at AiTuts. In less than two years, this model has gone from distorted human faces (some almost unrecognizable) to photorealism.

Female knight generated by Midjourney, V1 (Feb 2022), V4 (Nov 2022), V6 (Dec 2023), images from AiTuts
Vintage photo of girl smoking generated by Midjourney, V1 (Feb 2022), V4 (Nov 2022), V6 (Dec 2023), images from AiTuts

I have been surprised by the rate at which both the quality and the versatility of AI-generated images have increased, with the anime figurines being one of the more recent (and delightful) examples of something AI can create unexpectedly well. I’m limiting this post to still image generation, but the same is happening for music, video, and even writing code (my last three hobby programming projects were largely created by AI). It’s reasonable to assume that AI will make substantial improvements in generating 3D model files, so soon you’ll be able to 3D print your cool little anime figurine.

There are, of course, significant implications of having computers provide a practical alternative to work that used to require humans. Much like the disappearance of travel agents once the Internet democratized access to booking travel, we should expect to see a dramatic reduction in demand for human labor, and this will be disruptive and upsetting… some professions will be nearly eliminated. I don’t want to be dismissive about the human impact of more powerful automation.

At the same time, AI can empower people and create entirely new opportunities. Large language models (LLMs) create the opportunity for customized learning, where eventually individuals all across the planet can have a dialog with an AI teacher, navigating millions of human-years of knowledge. More and more, people will not be limited by their resources, only by their ideas… The average person will be able to build a website or a phone app by describing what they want, and someone who considers themselves “not artistic” will be able to create songs, artwork, or even movies that will eventually be box office quality. AI will also likely play a significant role in things like medical advances and energy efficiency, things we generally consider good for humans.

Did you enjoy making yourself into an anime figurine? Did you come up with a prompt that made a super cool image? Did you figure out how to get my male pattern baldness accurate on the figurine? Think my hot take on being optimistic about AI is horrible? Leave a comment, below!

Very Basic SunPower and Home Assistant, no HACS

In my continuing adventures trying to add better monitoring to my SunPower solar farm, I took a slightly different approach than what seems to be the common path on the Home Assistant Options for Sunpower solar integration? thread, and instead used Apache for the reverse proxy and skipped HACS altogether, using the RESTful integration. If you haven’t read it already, you may want to start with Getting Administrator Access to SunPower PVS6 with no Ethernet Port, which covers my earlier failure.

Networking

The first thing was getting to the SunPower PVS6 administrative interface. Since I didn’t have easy cabling access, I used a $7 ethernet adapter and a TP-Link AC750 Wireless Portable Nano Travel Router (TL-WR902AC). There is a cheaper model of the TP-Link that would have worked just fine, but even at $39 it was less expensive than most of the lowest-end Raspberry Pi crazy-ass prices right now. Power for the TP-Link comes from the LAN4 port on the PVS6, and the ethernet connects to USB2/LAN. The TP-Link is configured in “Router Mode”, where it connects by wired ethernet to the PVS6 and creates a separate network for all devices that connect by wifi. If you do this, you will want to configure the TP-Link to use a network different from your home network (e.g. if your home network is 192.168.0.0/24, use something like 192.168.2.0/24).

TP-Link and ethernet dongle crammed in the SunPower PVS6

At this point you should be able to connect to the TP-Link wifi and test access to the administrative interface at http://172.27.152.1.

Of course, the problem now is we need to connect the home network to the SunPower network, but there is some nuance… we only want the web traffic. Very specifically, we do not want the TP-Link handing out new IP addresses on our home network, which is also why you don’t just plug the ethernet from the PVS6 into your home network.

I happen to have a home file / everything else server that runs on a Raspberry Pi, and already has Apache running. That server connects to my home network via an ethernet cable, so its wifi was unused and available. I connected to the SunPower wifi (SSID “sunpowernet”):

# single quotes keep the shell from expanding the "$$" inside the password
sudo nmcli d wifi connect sunpowernet password '5ekr1tp@$$'

Finally, I need to let the server know that when the destination network is the PVS6, it needs to use the wifi connection, not the ethernet connection:

# send PVS6-bound traffic out the wifi link via the TP-Link; note this does not survive a reboot
sudo ip route add 172.27.152.0/24 via 192.168.2.1

This is a great time to mention that it would be good hygiene to set up firewall rules on your server blocking incoming traffic from the TP-Link, other than DHCP and established connections, in case the PVS6 is ever compromised.
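
A minimal iptables sketch of that idea, assuming wlan0 is the interface facing the TP-Link network (the interface name and rules are illustrative; adapt them to your firewall setup):

# Allow replies to connections the server initiates, plus DHCP leases from the TP-Link
sudo iptables -A INPUT -i wlan0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i wlan0 -p udp --sport 67 --dport 68 -j ACCEPT
# Drop everything else arriving from the SunPower side
sudo iptables -A INPUT -i wlan0 -j DROP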

Reverse Proxy

While HAProxy is super awesome and you should absolutely use it if starting from scratch, I happen to have a home server that gets 5 requests per month and was already running Apache, so I wanted to do as little extra work as possible. Fortunately, Apache has a reverse proxy module, and that makes this pretty easy. I set up a virtual host with the following sunpowerproxy.conf config:

<VirtualHost *:80>
        ServerName sunpowerproxy
        ServerAlias sunpowerproxy.mypersonaldomain.net
        ServerAdmin [email protected]

        ProxyPreserveHost On

        ProxyPass / http://172.27.152.1/
        ProxyPassReverse / http://172.27.152.1/

        ErrorLog /var/log/apache2/sunpowerproxy-error.log
        LogLevel warn
        CustomLog /var/log/apache2/sunpowerproxy-access.log combined
</VirtualHost>

The virtual host is going to expect the HTTP request to come for a server named “sunpowerproxy” (or whatever you name it), so you’ll need to add a DNS entry pointing that name to the server’s ethernet address, not its wifi address.
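
If you don’t run local DNS, a hosts file entry on any client that needs the name is a quick way to test (the 192.168.0.10 address is a made-up example; use your server’s actual ethernet IP):

# Append a hosts entry mapping the proxy name to the server's ethernet address
sudo sh -c 'echo "192.168.0.10 sunpowerproxy sunpowerproxy.mypersonaldomain.net" >> /etc/hosts'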

If you’ve done everything correctly (modules installed, site enabled), you should be able to test everything by calling the PVS6 API from a web browser, pointing to http://sunpowerproxy.mypersonaldomain.net/cgi-bin/dl_cgi?Command=DeviceList

After a few seconds you should get a JSON blob listing all of your devices.
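
For reference, enabling the modules and site and then testing from the command line might look like this on a Debian-style Apache install (a sketch; the commands assume the standard Debian layout):

# Enable the reverse proxy modules and the new site, then reload Apache
sudo a2enmod proxy proxy_http
sudo a2ensite sunpowerproxy
sudo systemctl reload apache2
# Pretty-print the response to confirm the proxy works end to end
curl -s 'http://sunpowerproxy.mypersonaldomain.net/cgi-bin/dl_cgi?Command=DeviceList' | python3 -m json.tool | head -40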

Home Assistant Configuration

Finally, we need Home Assistant to be able to pull the values from the proxy. The RESTful integration provides a pretty easy way to do this… here is a basic configuration to get the current power usage and overall energy, although a lot more information, including details for each individual panel, is available:

rest:
  - scan_interval: 60
    resource: http://sunpowerproxy.mypersonaldomain.net/cgi-bin/dl_cgi?Command=DeviceList
    sensor:
      # Indexes [1] and [2] below are where the production and consumption
      # meters appear in my DeviceList response; verify the order on yours.
      - name: "Sunpower Production Power"
        unique_id: "SPPP"
        json_attributes_path: "$.[1]"
        device_class: power
        unit_of_measurement: "kW"
        state_class: "measurement"
        json_attributes:
          - "p_3phsum_kw"
        value_template: "{{ state_attr('sensor.sunpower_production_power', 'p_3phsum_kw') }}"

      - name: "Sunpower Consumption Power"
        unique_id: "SPCP"
        json_attributes_path: "$.[2]"
        device_class: power
        unit_of_measurement: "kW"
        state_class: "measurement"
        json_attributes:
          - "p_3phsum_kw"
        value_template: "{{ state_attr('sensor.sunpower_consumption_power', 'p_3phsum_kw') }}"

      - name: "Sunpower Production Energy"
        unique_id: "SPPE"
        json_attributes_path: "$.[1]"
        device_class: energy
        unit_of_measurement: "kWh"
        state_class: "total"
        json_attributes:
          - "net_ltea_3phsum_kwh"
        value_template: "{{ state_attr('sensor.sunpower_production_energy', 'net_ltea_3phsum_kwh') }}"

      - name: "Sunpower Consumption Energy"
        unique_id: "SPCE"
        json_attributes_path: "$.[2]"
        device_class: energy
        unit_of_measurement: "kWh"
        state_class: "total"
        json_attributes:
          - "net_ltea_3phsum_kwh"
        value_template: "{{ state_attr('sensor.sunpower_consumption_energy', 'net_ltea_3phsum_kwh') }}"

Now you should have the ability to add the SunPower sensors, and configure the Energy dashboard!

The Energy dashboard in Home Assistant

Now that I have this working, I will probably realize that hass-sunpower via HACS is a way better solution, but only the RESTful integration would need to change; all of the network and proxy configuration would carry over.

Finally, if you’ve made it this far, you probably realize that it would be way better if SunPower offered a reasonable API for home integrations, instead of making people take these ridiculous steps… please let your SunPower contact know!

What’s your SunPower and Home Assistant experience? If you’re following in my footsteps (yikes), how did it go? Please leave a comment, below!

Getting Administrator Access to SunPower PVS6 with no Ethernet Port

Well, if you landed on this post you either have a need to cure your insomnia or you have a very specific problem. I recently decided to become a sun farmer, and went with SunPower, which is great, but they don’t offer integrations beyond their decent but limited web and mobile apps. In particular, I wanted to integrate with Home Assistant, because… well, just because.

The main solar interface from SunPower is the PVS6 (successor to the PVS5), and by connecting to an administrative interface it is possible to pull some detailed data like specific energy output and health for each panel. The good news is the PVS6 comes with two ethernet ports, one for a WAN to connect to their servers and one for a LAN that will allow access to the administrative UI, and all one needs to do is connect to said port and then… hey, WTF? My PVS6 doesn’t have either of these ethernet ports! So, yeah… evidently there is a new version of the PVS6 that does not have ethernet ports, and the primary WAN connection is via wifi.

A blurry photo of the ethernet-port-less PVS6

After digging around teh webz, it seems that the PVS6 USB ports will work with a USB-to-ethernet adapter, but several people reported some adapters didn’t work. I’m unsure if the magical requirement is that the adapter needs to be USB 2.0, but I found a $7 adapter on Amazon, and it just worked. I connected my laptop to the USB2/LAN port, the PVS6 assigned an address to my laptop, and browsing to http://sunpowerconsole.com/ provided a web administration interface. However, my PVS6 is not within convenient ethernet wiring distance, so I dug around some more and found Dolf Starreveld’s page, which included an amazingly comprehensive doc, Monitoring a solar installation by tapping into a SunPower PVS5 or PVS6. This doc starts with the assumption you have a PVS* with an ethernet connection and want to get to wifi, and with my USB-to-ethernet dongle, that’s what I had, so all I needed to do was mount a Raspberry Pi in the PVS6 to act as a router / bridge to my network. But while reading his doc, I noticed a mention of a hotspot interface available for a limited time after PVS6 power-up, and a link to a SunPower doc on commissioning the PVS6 via wifi… this sounded promising.

Sure enough, when I scanned for wifi connections, I found a SunPower SSID that matched my system. And since my system had been on for days, it didn’t appear that the 4-hour window applied, so great news! The formula for the SSID is “SunPower” immediately followed by characters five and six of the PVS6 serial number, immediately followed by the last three digits. The password follows a similar formula: characters three through six of the PVS6 serial number, immediately followed by the last four digits. Once connected, I had the exact same access I had when directly connected via ethernet.
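
In bash, the derivation looks something like this (the serial number below is made up for illustration; use the one printed on your PVS6):

SERIAL="ZT01234567890441"                  # hypothetical serial number
SSID="SunPower${SERIAL:4:2}${SERIAL: -3}"  # "SunPower" + characters 5-6 + last 3
PASS="${SERIAL:2:4}${SERIAL: -4}"          # characters 3-6 + last 4
echo "SSID: $SSID  Password: $PASS"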

But the cool stuff isn’t really in the web UI; you need to call the API directly. For example:

http://sunpowerconsole.com/cgi-bin/dl_cgi?Command=DeviceList

will show all devices and panels, with a ton of data on each. Dolf Starreveld’s document covers these in detail.

Since I don’t plan to run this from my laptop, I still need to bridge the networks… several people have written about using a dedicated device like a Raspberry Pi, including Scott Gruby’s Monitoring a SunPower Solar System, where he uses a very lightweight Raspberry Pi Zero W and a simple haproxy setup. However, I’d like to avoid another device (especially with the current prices for Raspberry devices – holy crap), and my Raspberry Pi 4 file server connects via ethernet, so I’ll likely use its wifi to connect to the PVS6 and run the proxy from there. After that I’ll configure Home Assistant and likely bore you with another posting.

And, no sooner do I get to the end of writing a post than I realize that the wifi network has vanished, so I either need to find a way around that problem or else I’m adding a router to my PVS6.

Are you doing anything interesting and hacky with your SunPower system? Do you have cool integrations with Home Assistant? Did you stay awake through this whole post?… please leave a comment, below!

ESPHome Temperature and Humidity with OLED Display

For the 3 people that have been reading my posts, you know my journey from hacking off-the-shelf ESP8266 smart switches to creating custom devices, most of which I connect to Home Assistant so that my smart devices are secure and don’t rely on the existence of any particular vendor. The next step in this journey is using ESPHome to simplify the creation of custom software for the ESP8266/ESP32. For my first project, I created ESPHome Temperature and Humidity with OLED Display.

Why ESPHome?

Why use ESPHome instead of something like the Arduino IDE? Simply put, it’s simple but powerful. As an example, the custom software I made for the temperature & humidity reader in my bathroom was 8 custom lines of code (which wasn’t really code, but configuration). YAML configuration files provide access to a ton of sensors, and if you need to dig in deeper with more complicated functionality, ESPHome provides lambdas that allow you to break into custom C++ at any time.

One of the other cool things about ESPHome is, while it integrates seamlessly with Home Assistant, the devices are meant to run independently, so if a device is out of range or has no network connection, it still performs its functions.

And I wanted to learn something new, so…

Why This Sensor?

I have a few temperature sensors around the house, and I also keep one in the camper van, mostly out of curiosity so I can compare it with the in-house temperatures using graphs on Home Assistant. However, I realized that when I went camping I wanted access to the temperature but couldn’t get it without being connected to my home network. The old version of the van thermometer was based on an ESP8266 garage door opener (I never shared that posting, so sorry, or you’re welcome), and I didn’t want to add a display screen to that code, as I really don’t need one for a garage door (yes, I realize something not being needed usually doesn’t stop me from building it). I decided I might as well take the opportunity to use ESPHome, since it is simple and works offline.

It’s Time to Build

I won’t go into the details of setting up Home Assistant, but if you are into home automation at all, it is super awesome. Installing the ESPHome add-on enables the flashing and managing of ESP8266/ESP32 boards, and they automatically connect to Home Assistant at that point.

For the lazy, ESPHome is generally a few clicks and a little copy pasta. For this project, it’s no different, with the step-by-step details available on my Github project page.
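
If you would rather skip the add-on, the standalone ESPHome command line tool can do the same job; a sketch, assuming Python is installed and using a hypothetical config file name:

pip3 install esphome
esphome wizard van-sensor.yaml   # interactive prompts scaffold a basic config
esphome run van-sensor.yaml      # compile and flash (USB the first time, OTA after)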

Wiring complete on DHT11 and ESP8266 (ESP-12F) before gluing case together

For the hardware, I used the very tiny ESP-12F, a DHT11 sensor, this OLED display, and a few miscellaneous components. All of these fit pretty nicely in the case, although it is a pretty snug fit, so I highly recommend very thin wire; this 30-gauge silicone wire was super great for the job. Exact components, the wiring diagram, and the STL files to 3D print the case are on my Github project page.

When assembling, removing the pins from the DHT11 board and soldering the wires directly to the board will let it sit flush against the case, giving the sensor better exposure and a better fit. The DHT11 is also separated from the ESP8266 board, as I was hoping to insulate the temperature reading from any heat generated by the board (not sure that worked). There is a hole between the chambers to thread the wires, and I suggest a bit of hot glue to block air flow once wired.

As you can see in the photo, I used generous dollops of hot glue to hold the components in place… it isn’t pretty, but nobody will ever see it (well, unless you photograph it and blog about it, in which case still, probably nobody will see it). I sealed the case with Zap-A-Gap glue, as I found original Super Glue was way less effective.

Instructions

Plug it in.

Okay, well… it’s a little more than that. The screen will alternate between showing the temperature and humidity in text format and a 24-hour graph that is a bit useless for now, since I am not sure how to get the historical values to label the graph. The button will toggle screen saver mode on / off, and the screen saver activates automatically after a short amount of time.

If you want to get a little fancier, I have this sensor in the bathroom and it will turn the vent fan on (via Home Assistant) when the humidity reaches a certain level… it’s useful for anything where you want to control devices based on temperature or humidity, and even more useful if someone might want to see the temperature.

Are you doing home automation with ESPHome? Do you have suggestions or requests? Did you actually read this blog post for some reason? I want to know… please leave a comment, below!

3D Printed Case for ESP8266 and SSD1306 display

3D printed case for ESP8266 and SSD1306 OLED display

When it was clear that nobody was asking for or needed my BART Train Monitor with ESP8266 and SSD1306 display, I knew the obvious response was to invest more time into the project by creating a model for a 3D printed case.

3D printed case for ESP8266 and SSD1306 display with measurement and coins for size
The BART watcher 3D printed case

I made a few hardware adjustments from the original build: I am using the much smaller NodeMCU ESP-12F for the board and this SSD1306 0.96 inch OLED I2C display module, which features a full text row in yellow and the rest of the screen in blue. Finally, I added a button to the project, because everything needs a button (I happened to use these).

The case isn’t fancy, but it works… the final build is approximately 1.25″ x 1.5″ x 1″ (width x depth x height), or 3.2cm x 3.8cm x 2.5cm for most of the planet. One challenge with the size is that everything is extremely tight in the case, so I recommend using very thin wire in your build. There are two 3D models, the base and the cover; assembly is just cramming everything in and gluing the case together. I was using relatively thick wires and everything is so compressed I didn’t even need to glue the boards in place, but it would probably be a good idea to use a little hot glue to secure things inside.

The blue and yellow OLED display with BART schedule

I really like the two-color display, which was more of an accident than anything, as one of the original all-blue displays seemed to go bad and was very dim, so I ordered replacements.

With the addition of the button, I was able to add some new functionality. A quick press of the button will toggle the screen off and pause network updates, effectively a screen saver sleep mode. Another quick press turns the screen back on. A long press will enter menu mode. While in menu mode, a short press will iterate through the menu items, and a long press will select the menu item.

ESP8266 BART Train Monitor menu screen
OLED display with BART Watcher menu

All of the source code, 3D models, and wiring instructions can be found on my Github page, bdurrett/BART-watcher-ESP8266. The project still needs work to be generic enough to be useful to anyone, but since this works for me (and I don’t think anyone else actually needs this), I will likely stop further development. That said, if anyone wants to contribute, I’d be happy to collaborate!

BART Train Monitor with ESP8266 and SSD1306 display

I frequently commute on BART and wanted a convenient way to tell when I should start walking to the train station, ideally something that is always accessible at a glance. A gloomy Saturday morning provided some time to hack together something…

ESP8266 to SSD1306 wiring diagram

I had already purchased a ~1 inch SSD1306 display and had extra ESP8266 boards lying around, so I figured I would start there. Wiring the screen to the board is super simple, just 4 wires (see diagram).

From there it was pretty simple to pull the real-time train data using the BART Legacy API. BART also offers modern GTFS Schedules, which is the preferred way to access the data, but from what I could tell, this would make the project significantly more difficult given some of the limitations of the ESP8266. So, I went the lazy route.
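
For a sense of what the device fetches, the Legacy API can be exercised from the command line (the public demo key and the EMBR station code below are from BART’s docs as I recall them; verify both before relying on them):

curl -s 'https://api.bart.gov/api/etd.aspx?cmd=etd&orig=EMBR&key=MW9S-E7SL-26DU-VV8V&json=y' | python3 -m json.tool | head -30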

Coding was pretty simple; most of the time was spent rearranging the elements on the screen. Well, actually most of the time was spent switching from the original JSON and display libraries I chose, as I wasn’t happy with them.

BART-watcher-ESP8266
(early layout)

There’s a lot to fit into a 1-inch display, but I got what I needed. The top line shows the date / time the station information was collected, and the departing station. The following lines are trains that are usable for the destination station, with the preferred (direct) lines in large type and less-preferred (transfer needed) lines in small type. Finally, if a train is within the sweet spot where it isn’t departing too soon to make it to the station, but I also won’t be waiting at the station too long, the departure time is highlighted. Numbers are minutes until departure, “L” means leaving now, and “X” means the train was cancelled (somewhat frequent these days).

In the example above, these are trains leaving Embarcadero for the North Berkeley station (not shown); the Antioch and Pittsburg/Bay Point lines require a transfer, while the Richmond line is direct.

At some point I would like to use a smaller version of the ESP8266 board and 3D print a case to make it a little nicer sitting on a desk, but then again, there’s almost zero chance I’ll get around to it. If anyone is into 3D printing design and wants to contribute, I’ll give you all of the parts needed.

The code / project details are available on my GitHub page at BART-watcher-ESP8266, feel free to snoop, contribute, or steal whatever is useful to you.

BART Watcher on ESP8266 and SSD1306 OLED display
The full build of BART Watcher on ESP8266 and SSD1306 OLED display

Do you have suggestions for features you think would be cool, want to remind me that I waste a lot of time, or maybe you even want one of these things? Please leave a comment, below!

Robots Building Robots: AI Image Generation

I’ve been incredibly impressed with AI image generation, both in how far it has come and how quickly advances are being made. It is one of those things that 5 years ago I would have been pretty confident wasn’t possible, and now there seem to be significant breakthroughs almost weekly. If you don’t know much about AI image generation, hopefully this post will help you understand why it is so impressive and likely to cause huge disruptions in design.

Midjourney AI-generated image, “beautiful woman, frolicking in water, gorgeous blonde woman, beach, detailed eyes”

As a quick background, AI image generation takes a written description and turns it into an image. The AI doesn’t find an image; it creates one, having been trained on millions of images and effectively learned the patterns that fit the description. The results can be anywhere from photo-realistic to fantasy artwork or classical painting styles… almost anything. There are several projects available for pretty much anyone to try for free, including Midjourney, Stable Diffusion, and DALL-E 2. I am using DiffusionBee, which runs on my MacBook, even without a network connection (i.e. it isn’t cheating and pulling images off of the Internet). Oh, and image generation is fast… from a few seconds to about a minute.

If it isn’t obvious why this is amazing and likely to be massively disruptive, imagine you need a movie poster, blog images, magazine photos, book illustrations, a logo, or anything else that used to require a human to design. The computer is getting pretty good and can generate thousands of options in the time it takes a human to generate one. It can actually be faster to generate a brand new, totally unique image rather than search the web or look through stock photography options. For example, the robot image featured at the top of this posting was generated on Midjourney with about 5 minutes of effort.

As a practical example, recently Sticker Mule had one of their great 50 stickers for $19 deals and I wanted to create a version of the Banksy robot I use for my blog header, but I wanted a new style, something unique to me. However, I am not an artist so coming up with anything that would look good was unlikely. Then I remembered my new friends the robot overlords, and thought I would see if they could help me.

Quickly cleaned-up source image inspiring a robot
The Banksy robot cleaned up just a little

One of the cool things about AI image generation is you can seed it with an image to help give some structure to the end result. The first thing I did was take the original Banksy robot, remove the background and spray paint from the hand, and fix the feet a bit. This didn’t need to be perfect or even good, as it is just used by the AI for inspiration, for lack of a better word.

I loaded up DiffusionBee, included my robot image and simply asked it to render hundreds of images with the prompt “robot sticker”. And then I ate dinner. When I came back to my computer, I had a large gallery of images that were what I wanted… inspired by the original but all very different. Importantly, they looked like stickers!

If you look through the gallery, above, you can see the robot stickers have similarity in structure to the original, but there is huge variation on many dimensions. In some cases legs are segmented, sometimes solid metal. Heads can be square, round, or even… I’m not sure what shape it is. The drawing style varies from child-like art to abstract. And again, these are all new, unique images created by AI.

The winning robot, headed to Sticker Mule

The biggest challenge I had was trying to pick just one… so many were very cool. I finally decided and it took about another 5 minutes to clean up the image a little to prepare it for stickerification.

When I looked at my contact sheet, my army of robots, it reminded me of my early days developing video games… if I had wanted to make a game where each player gets a unique avatar, it would have taken months to have somebody create these images. Today it takes hours.

I mentioned that things are progressing quickly… in the last few months we’ve gone from decent images to beautiful images to generating HD video from text prompts. It isn’t hard to imagine that, in the not-too-distant future, it will be possible to create a full-length feature film from the comfort of your living room, and that the quality will be comparable to modern Hollywood blockbusters. The next Steven Spielberg or Quentin Tarantino won’t be gated by needing $100 million in backing to make their film; the barriers will be significantly smaller. AI has the potential to eliminate some creative professions, but it also has the ability to unlock opportunities for many others.

What are your thoughts? Is AI image generation an empowering technology that will democratize creative expression, a horrible development that will put designers out of work, or do you just welcome our new robot overlords? Leave a comment, below!

Easier Smart Devices with CloudFree

If you followed my previous posts on hacking IoT (Internet of Things) devices to make a more secure and sustainable smart home, you may have the perception that this is an overly complicated process that no sane person would pursue. You’re not wrong, and over the last year I’ve had several failed attempts at hacking devices, for reasons ranging from the casing requiring a saw to the programming pins being inaccessible. However, I discovered CloudFree, love their products, and think they provide a simple solution for making a smart home that is truly under your control.

Note: This is not a paid posting, I am receiving no compensation, goods, or services in exchange, and have no ownership interest in CloudFree – I am simply a happy customer.

As a quick reminder, most IoT devices require an external Internet connection to function. The problem is that this is less secure, as a random company, often in another country (frequently China), is controlling and updating the software, as well as harvesting data. Also, if the company goes out of business, this often means your device ceases to function.

I stumbled upon CloudFree as I was looking for an alternative for the Amazon Smart Plug, which is great, but could only be controlled by Alexa, and I wanted something that could also be controlled by Google Nest, Home Assistant, a web interface, or pretty much anything. As implied by their name, CloudFree sells devices that do not require an Internet connection, emphasizing user ownership and control. This sounded perfect.

Dipping my Toes in: the CloudFree Smart Plug

CloudFree Smart Plug 2

In addition to selling third-party devices, CloudFree was manufacturing their own Smart Plug, which had a similar form factor and came pre-installed with Tasmota, open source firmware with a built-in web UI. And, at $13 it was a good deal. I ordered two, and about a week later they arrived.

Setup was super simple… plug it in, connect to the temporary wifi it creates, and configure it to connect to your home wifi. You can also set up passwords and things like MQTT. It took about 3 minutes, and the switch had both a web interface and was fully connected to Home Assistant, making it accessible by Alexa as well.

The other details were nice, too… the packaging is simple paper and thin cardboard, and the actual device looks good and seems to have quality consistent with nicer devices I’ve seen. Oh, and it has a lot of functionality for things like tracking power consumption. I ended up ordering three more, which took a few weeks to arrive because they were backordered.

Even Deeper: CloudFree Smart Bulb

CloudFree Smart Bulb

I needed EVEN MOAR switches, and I decided to try the one other product CloudFree makes, their CloudFree Smart Bulb. This is a pretty basic 10W LED bulb that also allows you to control the color, color temperature, and brightness, again with the super easy setup and Tasmota UI. I’ve just started playing with it, so I can’t give much of a review, but it seems well-made and does exactly what I was expecting. It reads “indoor use only”, but I am tempted to try it in my enclosed light post and change the color for holidays, events, or maybe an alarm. This shipment was relatively quick, switches and bulbs arriving in about a week, and $15 for the bulb seemed like a good price.

CloudFree Wish List

I am really happy with CloudFree overall – it is a great resource for finding user-controlled smart home devices. They have a bunch of third-party sensors, plugs, and gadgets that are all user controlled, no Internet needed. If I could change anything, it would simply be adding more products, ideally made by CloudFree. Specifically, I would love to find a well-made light switch (ideally with a dimmer), or the holy grail, a 3-way dimming light switch. But, for what they have right now, they are great, and I recommend them to anyone looking for a simple way to add to their smart home.

Have you found other great sources of secure, sustainable IoT devices? I’d love to know about them – please leave a comment below!

Unlimited Freedom of Speech Fails on Platforms

On January 8, 2021 Twitter permanently suspended Donald Trump’s account, joining Facebook, Instagram, and Twitch in the censorship of the President. Many prominent voices stated this is a dangerous encroachment on freedom of speech, sometimes making comparisons to China’s government censoring the people. Having operated communities of millions of users, I believe Twitter’s biggest failure was not applying its rules consistently to all users, enabling abuses to increase in magnitude and eventually requiring the drastic response of a permanent suspension. Further, a social platform that does not censor, where complete freedom of speech is guaranteed, is an idealistic vision, but would have questionable viability and is likely unwanted in practice.

I’ll start with the basics: the First Amendment right to freedom of speech prohibits the government from limiting that speech; it does not require citizens or companies to provide the same freedom. When a person or company shuts down discussion from someone on their property or platform, that person or company is exercising their own freedom of speech. For the most part, nobody has an obligation to let someone else use their property so that the other person can exercise freedom of speech.

But just because companies have the right to censor people, should they? This is a more complicated question. In theory, I want unlimited free speech, a world in which censorship doesn’t happen, because inevitably those in power, the censors, control access to ideas and information and will likely support their preferred narrative. In practice, I’ve learned that a lack of moderation will likely destroy a platform, and that moderation (a softer way to say “censorship”) is actually desired by communities, both online and in society in general.

Moderation is Necessary

Many platforms on the Internet start open and free and eventually become moderated, and a strong driver for that moderation is that abuse of the open platform destroys the value for others. Email started off great, with an inbox filled with relevant communications, and eventually turned into a signal-to-noise ratio of about 1:150, with fake Viagra and Nigerian princes rendering email nearly useless until filtering (moderation) eliminated SPAM. Message boards and social networks become unusable when SPAM and bots infiltrate, so in addition to community moderation, there is an ongoing, continually escalating battle to validate real users vs. bots. Even friendly actors can destroy a platform – when games were popular on Facebook and developers were heavily exploiting the feed for viral growth (hey, Zynga), the real social value declined as a majority of updates were about cows from your friend’s farm, and Facebook built tools to limit this game SPAM. There is always value in exploiting these open systems to the detriment of the other users, so abuse is the natural outcome.

This community desire for moderation, whether explicit or implicit, isn’t unique to online, we see it every day in society. No matter how much freedom we want for everyone, if somebody is singing in a theater during a movie, we want them to shut up or leave. We support one’s right to share their ideas, but if they are on a bullhorn outside of our house at 4:30 AM, we want them to go away. We set our own rules for private property and have laws for public property to support this moderation.

So when Twitter took action against Trump’s accounts, this was Twitter finally enforcing its policies on a user that had consistently abused the rules they established for their platform. They finally said, “like all other users, you can’t use the bullhorn at 4:30 AM either”. I am a strong supporter of our elected officials being held to the same rules that apply to regular citizens, especially since they are often the ones imposing these rules on the citizens (anyone that has been subject to a COVID shelter-in-place lockdown only to see their elected officials indoor dining or world traveling understands the rage-inducing hypocrisy). The editorial decision Twitter made was not the suspension of Trump’s account; it was years and years of allowing him to violate the terms they set for their platform, a slow progression that ended with the platform becoming a tool for organizing an attack on our government. It is impossible to know what would have happened if Twitter had enforced its policies consistently years ago, but generally problems are easier to manage when you address them early instead of letting them grow in magnitude and force.

Creating a Platform Without Censorship is Difficult

But won’t censoring just drive these users to build another, more powerful network, or to hidden communities where they can’t be reached? Maybe, but it isn’t that simple. A large, functional community requires the support of many companies that are effectively gatekeepers, and they have restrictions on abuses of their platforms. If you want mobile apps, you need Apple and Google’s platforms. If you decide to be web only, you still need hosting for your servers, a CDN (how content is cached and distributed at scale), and DDoS (distributed denial of service, when people kill your servers by flooding them with traffic) attack protection – companies like Microsoft, Google, Amazon, Akamai, and Cloudflare. Cloudflare is a great example of a company that has taken an extreme and sometimes controversial stance against censoring any site (even some pretty horrible ones), but eventually shut down protection for a site that was organizing and celebrating the massacre of people. Each of these platforms has the ability to greatly limit the viability of a service they believe is abusive, which is exactly what happened to Parler when Apple and Google determined their lack of moderation was unacceptable. There are other possible technology solutions, like decentralized networks, that might be able to reduce the dependency on these other platforms, but this isn’t just a technology problem.

Beyond technology requirements, what about the financial viability of a completely open platform? Monetization introduces another set of gatekeepers, from payment processors to advertisers and legal compliance. While there will always be some level of advertiser willing to place ads anywhere (yes, dick pills for the most part), most major advertisers don’t want to be associated with content that is considered so abusive that no major platform wants the liability of supporting it. Depending on the activities on the site, banks can be prevented from providing services to the platform, and even with legal but edgy content (e.g. porn), there is a huge cut that goes to payment processors, as they take a risk in providing money exchanges. Crypto can provide some options, but it is largely not understood by the average user and, depending on the content of the site, there can be legal requirements to KYC (know your customer) and liability for profiting on the utility of the site if the content is illegal. There are potential solutions for each of these, but it gets increasingly more difficult to achieve any scale.

Building on the dark web is a possibility, although it is still vulnerable to many of the same platform needs for scale. The dark web is also the worst dark alley of the Internet, difficult to discover and navigate, and the lack of moderation would mean many abuses, from honeypots (fake sites likely set up by law enforcement to have an easy way to track suspicious behavior) to scams and exploits preying on the average user that doesn’t understand the cave they’ve wandered into.

So while Trump certainly has a large base of followers and the financial resources (well, maybe) to have one of the best chances of being a catalyst for a new platform, there are many forces outside of that platform’s control that challenge its viability.

So, What’s Next?

If I had to guess, a few of the “alternative” networks will make a land grab for the users upset by the Presidential bans. The echo chamber of everyone having the same beliefs may not provide the dopamine response these users get from a network with extreme conflict, so it may end up being less interesting for them. I also assume the environment is ripe for people to go after the next big thing: decentralized, not subject to oversight. Ultimately, societal norms will likely limit the scale and viability of these networks, and those limitations will likely be proportional to the lack of moderation.

So, all we have to do is ensure societal norms reinforce individual liberty while not enabling atrocities on humanity. It’s that simple. 😟

Update: in the 10 hours since I wrote this, AWS (Amazon’s web hosting) decided to remove Parler from their service, which will likely take the site offline for at least several days.

Update January 10, 2021: Dave Troy (@davetroy) published a Twitter thread with the challenges specific to Parler, with details about their lack of platform options.