More Hacking Smart Home Devices with Tasmota, Youngzuth and Gosund

Gosund SW2 Dimmer with wires soldered to serial connections on circuitboard

This is a follow-up to the post Hacking Smart Light Switches and Other IoT Devices, where I installed Tasmota on a Gosund Smart Light Switch (SW1). I also installed replacement software on the Gosund Smart Dimmer Switch (SW2) and the Youngzuth 2-in-1 Switch, and since the process was quite different for each, I thought it might be worthwhile to share my experience.

Using TUYA-CONVERT is preferred since it doesn’t require opening up a device or soldering, but it seems like all newer devices are using software that can’t be hacked wirelessly anymore, so you will likely need to open your smart home device. Now, let’s go void some warranties!

Gosund Smart Dimmer Switch (SW2)

This dimmer switch has a nice capacitive touch panel for changing the lighting level, so it feels a lot like adjusting something on a touch screen. Since Gosund also makes the SW1 switch I started with, I was hopeful it would be similar and I could avoid soldering… not so much.

Gosund Smart Dimmer Switch (SW2), wires soldered to the circuitboard to enable a serial connection.

Like the SW1, the SW2 requires a Torx T5 screwdriver to open it. Unlike the SW1, the SW2 dimmer switch has two circuitboards in it, connected by a small cable. Reading about this switch, one person claimed it could not be hacked with that cable connected – this is not true, and I bricked one of these by detaching the cable… not recommended. Unfortunately, the serial connections are in the middle of the board, so the test hook clip approach I used on the SW1 would not work here. However, the connection points are pretty big and well-labeled, so soldering wires to them is pretty easy. Once I had connections, installing the new software was super simple, exactly like the SW1. It’s nice when things just work!

But, of course, things didn’t just work. When I installed the dimmer, the dimming functionality didn’t work from the switch itself. Looking at the Tasmota template details for the Gosund SW2 Dimmer, this switch requires extra scripting to function properly. However, scripting is not available in the basic Tasmota software, so it needed a different build. Fortunately, once you have Tasmota installed, switching the software is easy and only requires a web browser: select “Firmware Upgrade” from the web interface. Unless it isn’t so easy. Trying to install tasmota-scripting.bin from the unofficial releases failed; I first had to install tasmota-minimal.bin to get the smallest possible install, and then install the compressed version of the unofficial release, tasmota-scripting.bin.gz (only the .gz version would install successfully). I used the OTA (over the air) install for the minimal software (pointed at the official OTA releases), and manually uploaded the gzipped scripting binary downloaded from the unofficial experimental builds. Once that was installed, there is a new menu option in the web interface, “Configuration” -> “Edit Script”, where you simply paste and enable the script from the template page. None of this was complicated, but it also wasn’t very obvious… hopefully I can save you some trial and error.
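If you prefer scripting the first step instead of clicking through the web interface, Tasmota also accepts console commands over HTTP. Here is a minimal sketch of the two-step upgrade using that API – the device IP and OTA URL below are examples, so verify them against the current Tasmota documentation before running anything:

```python
# A sketch of the two-step upgrade using Tasmota's HTTP command API
# (http://<device>/cm?cmnd=...). The IP and OTA URL are examples.
import requests

DEVICE = "192.168.1.50"  # hypothetical IP of the dimmer on your network

def tasmota(cmnd: str) -> dict:
    """Send a Tasmota console command over HTTP and return the JSON reply."""
    reply = requests.get(f"http://{DEVICE}/cm", params={"cmnd": cmnd}, timeout=10)
    reply.raise_for_status()
    return reply.json()

# Step 1: point OtaUrl at the official minimal build and trigger the upgrade.
print(tasmota("OtaUrl http://ota.tasmota.com/tasmota/release/tasmota-minimal.bin.gz"))
print(tasmota("Upgrade 1"))

# Step 2 (after the reboot): upload tasmota-scripting.bin.gz manually via
# "Firmware Upgrade" in the web interface, then paste and enable the script
# under Configuration -> Edit Script.
```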

And the switch works great; it immediately worked with Alexa (make sure emulation is set to “Hue Bridge” to enable Alexa to use the dimming functionality).

Youngzuth 2-in-1 Switch

Youngzuth 2-in-1 Switch with wires soldered to make a serial connection to the TYWE3S.

The Youngzuth 2-in-1 Switch is actually two switches that fit into the space of a single switch. When I opened the switch (Phillips head screwdriver) and started looking around the circuitboard, I couldn’t find any connection points for the serial interface. I finally hit the point I had been dreading… needing to solder directly to the chip.

The Youngzuth 2-in-1 uses a TYWE3S package and fortunately a lot of details are available on the Tuya Developer website, so it was pretty easy to figure out the chip connections. I really hate soldering, especially on tiny components next to other tiny components, so I had a margarita to steady my hand.

TYWE3S pin connections, colors showing all pins needed to reprogram
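For reference, the hookup boils down to five connections. Here it is spelled out as a tiny script – a sketch based on the standard TYWE3S (ESP8266) pinout from the Tuya documentation, so double-check your board’s labels before soldering:

```python
# The serial-flash hookup for a TYWE3S module, based on the standard pinout
# documented on the Tuya developer site. Verify against your board before wiring.
TYWE3S_FLASH_WIRING = {
    "VCC":   "3.3V from the serial adapter (never 5V)",
    "GND":   "ground on the serial adapter",
    "TXD0":  "RX on the serial adapter",
    "RXD0":  "TX on the serial adapter",
    "GPIO0": "ground while powering up, to force the ESP8266 into programming mode",
}

for pin, connection in TYWE3S_FLASH_WIRING.items():
    print(f"{pin:6s} -> {connection}")
```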

Once wires were connected, installing the software was a breeze. Configuration was also easy, with an example provided in the Youngzuth 2-in-1 template.

Full disclosure, I have not yet installed the Youngzuth switch, as I made a rookie mistake, not realizing there is no same-feed neutral connection at the switch location. Once it is installed, I will post an update if anything requires extra work.

If you have any questions or different experiences with these devices, please leave a reply below!

Hacking Smart Light Switches and Other IoT Devices

Gosund SW1 circuitboard with test hook clips on serial connections

If you’ve ever had a free weekend, a desire to create a more secure smart home, and questionable judgment, you’ve come to the right place. In this post I’ll talk about how to take common IoT (Internet of Things) devices and put your own software on them.

Disclaimer: depending on the device, this exercise can range from pretty easy to drink-bourbon-and-slam-your-head-against-the-desk difficult. Oh, and there is some risk of electrocuting yourself or setting your house on fire. So everything after this point is for entertainment purposes only…

Why Hack Your IoT Devices?

Most people creating a smart home take the easy path… pick out some cheap and popular devices on Amazon, install the smartphone app to configure them, and are good to go. Why would anyone want to go through the extra effort to hack the device? There are a few good reasons:

  1. Security: With few exceptions, most smart devices require installing an app on your phone, often from an unknown vendor and with questionable device permissions. The devices themselves are tiny, wifi-connected computers whose software is updated by connecting to a server in some country and installing new software on a device that sits on your home network. Having a cheap device on your home network that requires full access to the Internet to work is bad, but it is worse when that software can be changed at any time, to do whatever the person changing it wants it to do. This could turn your light switch into part of a botnet or, worse, be exploited to attack other devices on your home network. By replacing the software, you create a device that works properly without ever needing access to the Internet, lowering the security risk. You can also see (and change) exactly what software the device is using.
  2. Sustainability: Since the devices require communicating with an external company for configuration and updates, when that company stops supporting the device or, worse, goes out of business and turns off their servers, your device becomes useless or stuck in its current configuration forever. By replacing the software, you are able to support the device even if the company ceases to exist. And by using open source software with a robust community, you will likely have very long-term support.
  3. Because I Can (mu ha ha ha): Okay, this is more of a fun reason, but worth mentioning. I’ve generally been much happier with the hacked versions of my products, whether it be my Tivo, Wii, or car dashboard. Smart light switches are a relatively low-risk hack, as they are inexpensive, and I’m assuming the risk is turning it into a brick, not causing an electrical fire (I’ll update the blog if I have an update on that).

Getting Started

My adventure started with the spontaneous purchase of a Gosund Smart Light Switch. Like a gazillion IoT devices sold by name brand and random manufacturers, this switch is controlled by an ESP8266. Most of these ESP8266 devices use a turnkey software solution made by Tuya, a Chinese company powering thousands of brands from Philips to complete randos.

For security and sustainability reasons, I decided I didn’t want this switch connected to my home network, and even if I wrote complex network firewall rules to limit its access, it would need to connect to the open Internet and other devices in my house to work properly.

I did some research and found Tasmota, an open source project that replaces the software on ESP8266 or ESP8285 devices, eliminating the need for Internet access and enabling functionality that makes them easier to connect to controllers like Amazon’s Alexa. The older examples required disassembling the device and soldering to hack it, which is exactly what I did not want to do. However, more recently there was an OTA (over the air) solution that didn’t require opening a device at all, and did all of the hacking over wifi… that sounded great.

Tasmota Wifi Installation

When I tinker I like to use a computer that I can reset easily so that I don’t have to worry about an odd configuration causing problems later. I have an extra Raspberry Pi that is handy for this, and put a clean install of the Raspberry Pi Desktop on a spare Micro SD card.

I installed TUYA-CONVERT, which basically creates a new wifi network and forges DNS (how computers translate a name like tuya.com into the numbers that identify a server) so that names resolve to itself rather than the Tuya servers. When the device goes to get a software update from the mothership, it gets the Tasmota software installed instead – hacking complete.
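If you want to try it yourself, the setup is only a few commands – this little wrapper is just a sketch of the steps from the tuya-convert README at the time I used it, so check the current repository before running it:

```python
# Rough sketch of the tuya-convert setup on a Raspberry Pi (commands from the
# project README; run on a machine whose wifi adapter you can dedicate to this).
import subprocess

def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

run(["git", "clone", "https://github.com/ct-Open-Source/tuya-convert"])
run(["./install_prereq.sh"], cwd="tuya-convert")  # installs dependencies
run(["./start_flash.sh"], cwd="tuya-convert")     # starts the fake AP and flashing process
```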

Gosund Light Switch In Dangerous Setting
An example of poor judgment, though the red load wire is capped, as that is not a good wire to touch when the switch is on.

I started running the tuya-convert script on my Raspberry Pi and, rather than go through the full process of installing the switch in the wall, I found a standard PC power cable (C13) was the perfect size to hold the wires in place and allow testing on my desk. DO NOT DO THIS – I am showing you only as an example of what a person of questionable judgment might do. The switch powered up, and on the tuya-convert console I could see it connecting and trying to get the new software! I love it when things just work.

But then, it didn’t work. While there was a lot of exciting communication happening between the Raspberry Pi and the switch, ultimately the install failed. Looking at the logs, I was getting the message “could not establish sslpsk socket”, and found this open issue, New PSK format #483. Apparently, newer versions of the Tuya software require a secret key from the server to do a software update, and without the key (only known by Tuya), no new software will be accepted. So, damn… these newer devices can’t use the simple OTA update. Also, if you have older devices, do not configure them with the app they come with if you plan on hacking them, as that will update them from the OTA-friendly version to one requiring the secret key.

Tasmota Serial Cable Installation

I realized I was too far down the rabbit hole to give up, so it was onto the disassembly and soldering option. The Tasmota site has a pretty good overview of how to do this, although I thought a no-solder solution would be possible, and tried to find the path that requires the least effort (yay laziness).

Gosund Light Switch Circuitboard
Gosund light switch SW5-V1.2 circuitboard, pen for scale. The connection points are the six dots towards the top, running down the right side (zoom in for labels).

Opening the switch required a Torx T5 screwdriver (tiny, star-shaped tool), and I happened to have one laying around from when I replaced my MacBook Pro battery. Looking at the circuit board, I realized that very tiny labels and contact points, combined with my declining eyesight, made this a challenge. I took a quick photo with my Pixel 4a and zoomed in to see what I needed… the serial connections on the side of the board (look for the tiny RX, TX, GND, and 3.3 labels… no, really, look). While soldering would be the most reliable connection, I was hoping test hook clips would do the job.

Since I was already using a Raspberry Pi, I didn’t need a USB serial adapter, as I could connect the Pi’s GPIO directly to the switch. Again, the Tasmota project has a page giving an example of connecting directly to the Pi. Whatever method you use, it is critical you connect with 3.3V, not 5V, as the higher voltage will likely fry the ESP8266. If you have a meter handy, check and double-check the voltage. And, if you’re using the Raspbian OS, you may find /dev/ttyS0 is disabled… you will need to add enable_uart=1 to your /boot/config.txt file and reboot.
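Before fighting with anything else, it’s worth a quick sanity check that the Pi’s UART is actually available. A minimal sketch using the pyserial package (assuming the connection is on /dev/ttyS0):

```python
# Quick check that the Pi's UART is enabled after adding enable_uart=1 and
# rebooting. If this opens without an error, /dev/ttyS0 is available.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyS0", 115200, timeout=1) as port:
    print(f"Opened {port.name} at {port.baudrate} baud")
```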

I connected the switch directly to the Raspberry Pi. There were several annoying things about this, starting with the fact that each time the switch is connected to the 3.3V, it reboots the Pi. And since almost every command to the switch requires resetting its programming mode through a power cycle, that means rebooting the Pi frequently (fortunately it is a fast boot process).

Test hook clips connecting the Raspberry Pi to the Gosund switch worked surprisingly well.

The good news is, the test hook clips worked, which was a bit of a surprise. I added a connection from Pi ground to the switch’s 00 pad (GPIO0, the green wire in the photo), as that forces the switch to enter programming mode at boot (it is okay to leave that connected during the hacking process, or you can detach it once it is in programming mode). I made sure everything was precariously balanced to add excitement and more opportunities for failure into the process. I was able to confirm that I had entered programming mode and had access to the switch using esptool, a command line utility for accessing ESP82xx devices. Success! 🎉
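The basic check I ran looks roughly like this (a sketch, assuming esptool is installed via pip and the switch was just power-cycled into programming mode):

```python
# Confirm the serial link by asking the ESP82xx for its MAC address.
# Power-cycle the switch with the 00 (GPIO0) pad held to ground first.
import subprocess

PORT = "/dev/ttyS0"  # the Pi's GPIO UART
subprocess.run(["esptool.py", "--port", PORT, "--baud", "115200", "read_mac"], check=True)
```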

The bad news is, other than being able to read the very basics from the switch, like the chip type, frequency, and MAC address, pretty much everything else failed. And each successful access only worked once and then required a reboot. I was unable to upload new software to the switch. After researching a bit, the best clue I found was reports of voltage-drop problems with homemade serial connections, and wiring directly to the Pi circuitboard seemed like it might apply. At this point I needed a drink, and went with a nice IPA.

But hey, once you’re this far down the rabbit hole, why stop? I decided to try a more traditional serial connection, using a CH340G USB to serial board.

Serial Killer Part Two

Apparently there was indeed an issue with using the Raspberry Pi directly for serial communication, as the USB to serial adapter worked perfectly. I validated the connection using esptool and then used the tasmotizer GUI, which makes it easy to back up, flash, and install new software on the switch. Many steps require rebooting the switch before proceeding to the next one, but that is as simple as unplugging the USB cable and plugging it back in (even better, it doesn’t trigger a reboot of the Raspberry Pi each time).
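For the curious, here is roughly what tasmotizer automates, expressed as plain esptool calls. This is only a sketch – the port name, flash size, and firmware filename are examples you would adjust for your own adapter and device:

```python
# Back up the factory firmware, then write Tasmota (roughly what tasmotizer does).
# Power-cycle the switch into programming mode before each step.
import subprocess

PORT = "/dev/ttyUSB0"  # typical device name for a CH340G adapter on Linux

def esptool(*args: str) -> None:
    subprocess.run(["esptool.py", "--port", PORT, "--baud", "115200", *args], check=True)

# 1. Back up the original firmware (these switches commonly have 1MB of flash).
esptool("read_flash", "0x0", "0x100000", "gosund-sw1-backup.bin")

# 2. After power-cycling back into programming mode, write the Tasmota image.
esptool("write_flash", "--flash_mode", "dout", "0x0", "tasmota.bin")
```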

Tasmotizer and the default web interface to configure your newly-hacked switch

Once the new software is installed, there is one final reboot of the switch (don’t forget to disconnect the ground to 00, or else it boots back into programming mode). At this point the switch sets up a wifi network named tasmota[mac], where [mac] is part of the MAC address. Connect to this network, point your browser to http://192.168.4.1, and you can configure your device. Set AP1 SSId and AP1 Password to your home wifi, click “save”, and a few seconds later your switch will be accessible from your home network.

I’ll provide the details of configuration in a follow-up post, but I used the Gosund SW1 Switch template, following these instructions to import it, and turned on “Belkin WeMo” emulation to make the switch automatically discoverable by Alexa, without the need to install special apps on my phone or skills on Alexa. The configuration process and connecting to Alexa were incredibly easy and took less than 5 minutes.
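The same configuration can also be scripted with Tasmota console commands sent over HTTP. This is just a sketch – the device IP is hypothetical, and the template JSON is a placeholder you would copy from the Tasmota device template repository:

```python
# Apply a device template and turn on Alexa-friendly emulation via Tasmota's
# HTTP command API. TEMPLATE_JSON is a placeholder; paste the real
# "Gosund SW1" template from templates.blakadder.com.
import requests

DEVICE = "192.168.1.50"  # hypothetical IP of the newly flashed switch

def tasmota(cmnd: str) -> dict:
    reply = requests.get(f"http://{DEVICE}/cm", params={"cmnd": cmnd}, timeout=10)
    reply.raise_for_status()
    return reply.json()

TEMPLATE_JSON = '{"NAME":"Gosund SW1", ... }'  # placeholder: copy the full template

print(tasmota(f"Template {TEMPLATE_JSON}"))  # load the template
print(tasmota("Module 0"))                   # activate the loaded template
print(tasmota("Emulation 1"))                # 1 = Belkin WeMo; 2 = Hue Bridge (used for the dimmer)
```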

Update January 2, 2020: I added a post on hacking the Gosund Smart Dimmer Switch (SW2) and the Youngzuth 2-in-1 Switch, each of which required a different technique.

If you’re curious about attempting this yourself, have questions about my sanity, or have other experiences hacking your smart devices, I’d love to hear from you – please leave a reply below!

Hinder, Don’t Halt: Griefing Content Thieves for Fun and Profit

The art of deterring content theft is an ongoing game of cat and mouse – generally any barrier you create to prevent theft is temporary, as thieves continue to find new ways to steal the content, so long as the value of the content exceeds the effort necessary to steal it. For this reason, it can often be more effective to hinder thieves instead of trying to stop them.

I see this “hinder, don’t halt” pattern come up with others who run large services, and you can see it reflected in solutions like shadow banning. One of the most common themes I hear is the satisfaction that comes from solutions that cause frustration for bad actors, so I’m sharing one from my personal experiences…

At IMVU, customers called Creators make content that they sell to other IMVU customers. The content they create is 3D items like avatar clothing, items to decorate an environment, and ways to customize an avatar. This content creates real value for other IMVU customers, who spend real money to purchase it from the catalog of over 10 million items. While many Creators create content just for the enjoyment of creating, some do it as a business, with a few making over $100K US annually. Whether creating for pleasure or business, all Creators hated having their work stolen. And, since there is real money from the sales of content, there is real incentive for thieves to try to steal it.

At one point we discovered a site that was selling a service that would allow people to steal Creator content without paying for it. It was pretty easy to detect the service and the initial response was blocking them, which immediately broke their service completely and, not surprisingly, made the thieves quickly respond by finding a new way around the block. The block lasted less than a day and the thieves were back in business.

The next response was more fun… rather than blocking the thieves, we made their service not work… sometimes… and inconsistently. Code was added to detect when thieves accessed content, and some of the content they accessed would be randomly, mildly corrupted. The corruption could be configured to occur at certain rates, on certain items, at certain times of day, and be disabled based on what appeared to be testing for the corruption. As a result, customers of the thieves started getting inconsistent results that would sometimes lead to content failing to load, and even crashes. If you are an engineer reading this, you understand why this is a nightmare scenario to debug and fix… customers are reporting different failure cases with no consistent way of reproducing the problem to understand the cause. And, since your code is working fine, the bug isn’t going to be found… you eventually have to discover that you are being served different content than is being served to legitimate customers.
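To make the idea concrete, here is an illustrative sketch (not the actual code) of serving occasionally-corrupted content to flagged clients at a configurable rate:

```python
# Illustrative only: flagged (thief) requests occasionally get content with a
# few bits flipped; legitimate customers always get clean data.
import random

CORRUPTION_RATE = 0.15  # fraction of flagged requests to corrupt (configurable)

def serve_content(data: bytes, client_is_flagged: bool) -> bytes:
    if not client_is_flagged or not data or random.random() > CORRUPTION_RATE:
        return data  # clean data for everyone else, and for most flagged requests too
    corrupted = bytearray(data)
    for _ in range(max(1, len(corrupted) // 10000)):
        corrupted[random.randrange(len(corrupted))] ^= 0x01  # flip one low bit
    return bytes(corrupted)  # subtly broken and maddening to debug downstream
```

The inconsistency is the point: because most requests still succeed, the thieves’ customers see random, unreproducible failures instead of an obvious block.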

The result of hindering was much more effective than blocking… it took many weeks for the thieves to understand what was happening and, during this time, we could see them getting bashed by the people that paid them because the stolen content was ruining their experience. By the time the thieves had found another solution, they had such a bad reputation that people were less willing to give them money.

If you have dealt with content thieves I would be interested in hearing your stories, successful or not. Please leave a reply, below!

Credits
Cat and mouse chase image by Jeroen Moes
Dungeons & Dragons dice by Lydia

Exposing Your Private Data – It’s Not (Just) Them, It’s You

This week the Wall Street Journal published a story about third-party Google App Developers being able to read your Gmail, which was followed by many other outlets trying to sensationalize the news. However, a huge source of the personal-data-exposure problem isn’t big companies providing access to customer data; the problem is customers unwittingly (or uncaringly) granting permission for their data to be accessed. And while many people are skeptical about companies like Google and Facebook handling their data, the far bigger risk is users constantly exposing their private data to relatively unknown companies in exchange for low-value benefits.

Overreaching Account Access

Many sites and applications allow you to sign-on through an account on Facebook, Google and other services. This process is known as single sign-on (SSO), and is convenient and generally secure, especially if you utilize improved security measures like two-factor authentication. However, some applications ask for more access than is necessary, and the user willingly exposes a lot of private data to a third party that they don’t really know.

This Sample Application owns you and all of your data… forever

The list of permissions presented when you first grant access can give a third party perpetual access to your information, usually long after you have forgotten you granted it.

If you are simply trying to log in to a new application using SSO, there should be very little reason to grant any special permissions. Applications that request access to private data like email, contacts, messages, or calendars will have full access to your personal data. If an application doesn’t manage your private data, it should not need access. To protect your personal data, you should only provide the absolute minimum level of access necessary and avoid applications that request more than they need.

Untrustworthy Third Parties

Some applications legitimately need elevated permissions to provide the service they offer, like inbox management, automatic scheduling, or even shopping deal comparisons. Many of these apps only access your data in the way necessary to provide the service, but many take full advantage of that access and leverage your data for their own benefit. According to articles on CNET and in the Wall Street Journal, ReturnPath scanned the inboxes of 2 million people to collect marketing data after they’d signed up for one of the free apps produced by its partners, and the company’s employees read around 8,000 uncensored emails.

Even if you trust the intentions of the company producing the application, security is a really hard challenge and even the best companies fail at it… if you are providing access to an unknown startup, you are putting an exceptional amount of trust in believing they have the resources to ensure proper security measures. Of course, when a company is acquired (or its assets are sold), the access to your private data is passed along to the purchaser, whoever that might be.

When considering trading access to your private data in exchange for an application, ask what you are really getting for the risk. If somebody came up to you on the street and offered you some coupons in exchange for letting them read all of your email (forever), would you make that deal?

It’s Your Browser, Too

In addition to granting companies access directly, web browser extensions can expose data from every website you visit. These Extensions in Chrome, and Add-Ons, Extensions, and Plugins in Firefox, provide enhanced functionality from password management to page translation, ad blocking, and simple video downloads. To provide these services, many extensions get access to everything you do in the browser. For example, a news feed reader has permission to “Read and change all your data on the websites you visit” – this means every page visited and all content on that page is accessible by the news reader extension… your web mail, your Facebook messages, your dating sites, medical issues you research… all available to some company that organizes news headlines for you.

As browser extensions potentially grant access to every account, extra care should be taken to ensure trust for the company and permissions before installing.

Clean it Up and Lock it Down!

Until we make progress on time travel, there isn’t a way for an individual to guarantee deletion of data leaked from previously granted access. There are a few steps to greatly reduce your risk going forward…

Eliminate access to every app you don’t use

Most people simply stop using an app and forget about the access they granted, which usually continues in perpetuity. Regularly review the permissions you have granted – you will almost certainly find some surprises. Facebook has settings for Apps and Websites, Google has a great Security Checkup, and other SSO services usually have a way of reviewing apps with access to your data. Only allow access to apps you regularly use, disable those you don’t, and review the permissions to ensure they match the access needed.

And do the same for browser extensions! If there are extensions you use infrequently, most browsers have the option to enable / disable instead of having to delete the extension, so you can easily grant access only when necessary.

Trust Before You Install

Installing applications and creating linked accounts on websites is simpler than ever. The downside to this ease is that users typically spend little time scrutinizing the application. If you are giving access to your private data, spend the time to understand who is getting access, and how they will use your data. A simple web search for the application and “security” or “trust” can reveal what others experienced. If the company doesn’t have a website with the ability to contact them, and a published policy about handling your private data, there is a good chance securing your private data isn’t a real concern for them, and it should be for you!

 

Did you actually check to see who you are sharing your private data with? If so, what is the craziest thing you found? Please share by leaving a reply, below!

How to Respond After Leaking Your Customer’s Data

The most recent consumer-hostile disclosure of an account breach was Uber’s leaking of 57 million accounts almost a year ago. I’d like to say this is an extraordinary event, but much like a favorite character getting killed in Game of Thrones, companies leaking customer data is just another regular occurrence we’ve come to expect. What continues to surprise me is how badly so many companies screw up their response to a breach. The one principle that should guide companies following a breach is, “make the decisions you would want a company to make if it was your account that was compromised.”

And sure, it’s easy to point fingers when it’s not you in the hot seat, so I’ll use the breach I managed as an example… It happened in September 2015, when I was CEO of a company that had over 100 million registered accounts.

Initial Response

The breach was caught around 11:00 PM… within a couple of hours we had a fire-team of employees in the office. The priority was confirming that the breach was indeed fully contained, and then validating that we understood the full extent of the breach. We wanted to communicate to customers as quickly as possible, and we wanted to be able to accurately convey the amount of exposure. Every other project was de-prioritized and employees were working 24/7 on projects related to the breach.

Thanks to some security precautions we had in place, we were able to detect the breach in real-time, limit the data that was accessed, and understand exactly what data was exposed. Also, due to the nature of the data that was accessed, the actual customer exposure was minimal (e.g. no credit cards, social security, addresses)… assuming the attacker had planned to use the data for malicious purposes, the actual value of that data was extremely low.

As we reached morning, we contacted law enforcement and legal counsel, both of which informed us that the data exposed was insignificant in terms of risk. We were also told that, because of the type of data accessed, there was no requirement to disclose the breach.

While we had a pretty solid understanding of what happened as part of the breach, we didn’t want to be overly confident, so we continued the process of going through hundreds of servers and employee computers to look for anything that might have been missed, a process that took a little over two full days.

The Ransom

Within 24 hours of the breach I started receiving emails that threatened to release the customer data and publicly announce the breach if we didn’t pay a sum of money. My response to the blackmail was letting them know I would consider their proposal, but ultimately the damage they would do is to customers that didn’t deserve to be exploited, and to employees, good people that already feel a ton of weight from the responsibility. They gave me a few days to make a decision.

Talking to Our Customers

After we had confidence that we had contained the breach, removed any attack vectors, and fully understood the data accessed, we were ready to talk to our customers. Less than 72 hours had passed, but it felt like an eternity getting to this moment.

We posted to our forums and messaged our customers individually with the details of the breach, specific data accessed, how that data can be used, and what steps to take (on our service and others) to protect against any further attack. We also disclosed that the hacker had tried to extort money in exchange for silence.

While I can’t say that any customer was pleased that the exploit occurred, many responded very positively to our handling of it. Earlier that year credit card and health care breaches of highly-sensitive data took many months to be announced, so many of our customers appreciated how quickly we moved to keep them informed.

Evidently the hacker didn’t read our forum post, as the next day they gave me the final warning that they were about to announce the breach to our customers and the media. I informed the hacker that we would not be paying the ransom, reminded them that the people they will hurt don’t deserve it, and pointed them to the forum posting fully disclosing the breach, accessible to all of our customers and the media.

Post Breach

Through a process of many, many postmortems and follow-up action items, the company continued to improve security in several areas, projects that extended many months. We understood exactly how the breach occurred, and the human component that enabled the breach. What we explicitly didn’t do is punish or threaten anybody – throughout the whole process we made all employees feel safe, which enabled people to be fully transparent and quickly disclose their mistakes, a critical aspect of quickly understanding how the breach occurred.

The moment that sticks out in my mind the most was an email I received from an employee in response to a detailed summary of the events I sent to the company. That employee expressed that they had never been so proud to be at a company, in the integrity we demonstrated to our customers, and the unwavering support for the employees. It was one of those emails that CEOs move to their “save forever” folder. 

Key Takeaways

While there are a lot of opportunities for companies to make customer data more secure, the unfortunate reality is even the companies with the best security practices experience breaches – this is going to happen. However, a few steps can provide better outcomes for all parties:

  1. Treat your customers as you would want to be treated.
  2. Make your employees feel safe. Fearful employees will conceal critical information that is necessary to fully understand the problem.
  3. Don’t negotiate with criminals. It’s bad for your customers, there is no way to enforce the criminal’s end of the agreement, and the deception is likely to be revealed at some point. Perhaps one acceptable variation on this takeaway is, if you do negotiate with criminals in the interest of your customers (e.g. to get details about how the leak occurred), still be transparent with your customers and disclose that a transaction occurred.
  4. Do the follow-up work. After an exhausting amount of effort getting past the initial breach it’s easy to feel like your work is done… make sure all of the known exploit vectors are eliminated.

 

Have you been impacted by a company’s data breach? I’d like to hear about your experience – please leave a comment!

How to Stop Me From Spying on Your Internet Usage

Yesterday Congress voted to erase privacy protections for consumers by passing a law making it illegal for the FCC to have rules to protect consumer privacy online. Specifically, this vote allows your ISP (Internet Service Provider, the company you pay for your Internet access) to collect and sell your Internet usage information without your permission. To be fair, you didn’t yet have these protections… they were just about to go into effect, and now they won’t.

Most people appreciate the right to keep private what they do in their own home and are unhappy with a violation of this privacy, but many don’t understand the potential impact on their lives, or how to protect themselves from these privacy violations.

What You Reveal Using the Internet

In your day-to-day usage of the Internet you expose to your ISP an enormous amount of data that enables them to target and classify you in ways that are valuable to advertisers, employers, insurance companies, and financial institutions.  Your ISP has the ability to sell companies data that classifies you based on health issues, financial status, sexual interests, religion, hobbies, and political views.

Every web search you make and every web page you visit is an opportunity for your ISP to understand you a little better. Searching information about depression?  Looking at the most recent coupon you got from BevMo?  Congratulations, you’re now part of the “risk of alcoholism” demographic that might be of interest to future employers or insurance companies.  Reading a medical site to figure out if that mole on your arm looks funny?  You are flagged as a cancer risk.  Searching for an anniversary present and looking at a dating site in the same week?  Divorce attorneys and real estate agents might pay handsomely to know who you are (or, more accurately, who your spouse is).

But wait, Brett – I use “Incognito” or “Privacy” mode on my browser… doesn’t that protect me?  Actually, no… these options prevent websites from permanently storing information on your browser that can later be used by that website to re-identify and track you, but they don’t do anything to secure the traffic that goes between your computer and the website, which always passes through your ISP.

But Brett, I know the little “https:” in the web address bar means secure, so I’m safe on those sites, right?  You’re better off, but you’re still leaking a ton of information… Secure websites do a great job of ensuring that the traffic sent between the website and your computer is encrypted and secure – so the contents of the interaction should be private.  However, your ISP can still see the Internet addresses you visit, so if you look at the Suicide Prevention Hotline, your ISP can’t see the specific data, but they know you are interested in content about suicide. This site-identifying information is also revealed through your DNS queries (how your computer turns a URL into an IP address), and most consumers have their DNS handled by their ISP.

Okay, Brett… fine, ISPs can do this shifty stuff, but this sounds like tinfoil hat territory.  Well, maybe, but these large ISPs have a history of doing some really shady things with your data, ranging from hijacking (and replacing) your search results to inserting ads into your web pages and secretly sending your web history back to the ISP.  The big name ISPs (Cox, Comcast, Time Warner, AT&T, and Verizon) spent money lobbying and buying votes because they are most capable of turning your private information into their profits (and they probably want a return on that investment).

You are the Product

Of course, collecting and selling information about users is the way many Internet companies (Google, Facebook) become powerful cash machines.  As a general rule, if you use a free service that doesn’t sell you anything, you are actually the product being sold to other companies.  The primary difference is these privacy-selling services are optional (you don’t have to use Facebook), and you are not paying for them.

An ISP is closer to the phone company as a utility – while you may have some choice in which ISP you use, frequently these choices are very limited and, if selling private customer information is a standard practice, your only alternate choice is not having Internet access.  If you found out that the phone company listened in on your conversations and sold transcripts to other companies, you’d likely be outraged.

Which brings up the question: what protection will you have against being highly targeted?  You filled out a request for health insurance online – can that insurance company acquire the data to make coverage liability decisions about you by requesting the data for your IP address, if not for your name specifically?  Can I go to my local ISP and buy data because I want to understand what news my neighbors read, what dating sites they use, and what movies they watch?

Keeping Your Internet Usage Private

For the more technically inclined, there are several options available (e.g. a centralized VPN at the router, or TOR servers), but these are not really accessible for the average consumer, so I’m going to cover what I think are the two best options accessible to most people that don’t have a system administrator living in their household.

VPN

A VPN (virtual private network) establishes an encrypted connection between your computer and another server, and that server accesses the Internet and relays the data back to your computer.  A VPN prevents your ISP from seeing anything you access – they only see a single connection to the VPN server.  While the VPN does conceal your data from your ISP, you need to find a trusted VPN provider as they now have access to your data.  As an additional challenge, if you are interested in making all Internet access from your home private, a VPN is unlikely to work with all of your devices (e.g. Tablets, Roku, Apple TV, Alexa / Echo, and Amazon Fire TV).  Finally, some Internet sites (like Netflix) specifically block VPNs, adding additional frustration to this solution.

Choose an ISP That Values Your Privacy

All ISPs have the ability to take advantage of Congress voting away your online privacy rights.  The big names (Cox, Comcast, Time Warner, AT&T, and Verizon) have the most capability of leveraging your private data, but this doesn’t mean that smaller ISPs won’t also use your private data – it is quite likely that bigger companies will offer an easy revenue-generating solution that allows smaller ISPs to provide access to your data, bringing in some extra cash (tempting for small ISPs that are typically at a significant disadvantage compared to the big names).

However, smaller ISPs can be more committed to respecting customer desires, and may be more receptive to customer requests to maintain privacy.  For example, since the early 1990s I’ve worked with LMi.net, which has always been a great partner for my business and personal Internet needs.  I called the owner and he told me several customers called after Congress voted and he responded, “It’s easy. We never have sold user data, and we never will.”  While big ISPs send me weekly junk mail trying to lure me in on some great Internet package (usually including TV), I understand the value of my ISP consistently making decisions that consider the best interest of the customer.

 

Do you have other suggestions for keeping your Internet usage private? Think I’m a paranoid crackpot?  Please leave a comment!

You Are Wrong About Your Stupid Account

You’re wrong – hackers are interested in your boring personal account, you are making it easy for them to get access, and it will likely end up being a bigger problem than you imagine.

Those are the stern words I want to use whenever I witness a friend doing the online equivalent of parking and leaving a stack of $100 bills on their car dashboard in a crime-ridden neighborhood. Instead I tend to suggest some easy steps to take to be more secure, which are almost invariably met with “it’s not a big deal”. I decided to write up my thoughts, so I can just point friends to this article and hopefully help others. This is absolutely not for altruistic reasons… I’ve had multiple experiences where somebody else’s bad online security habits resulted in nights and weekends of work for me and entire teams of people. I just want to sleep.

Hackers Want Your Stupid [insert lame service] Account

It seems absurd that your Lint Sculptures Discussion Forums password is of value to anybody… it’s just you and people you’ve met over the last 15 years that love to talk about dryer lint sculpting… security doesn’t matter. However, it was 15 years ago, so you chose a really lame password at the time (like “123456”), and now that an elite hacker has broken that code, they see your basic account details (your email, IP address, real name and city you live in). Again, who cares… that’s useless. Well, except you used the same password for everything back then, so with your email and password they can run a script to check 100,000 other sites and hey… looks like your genealogy, old photo sharing, and that antique Hotmail account you abandoned had the same password. Unfortunately, that banking thing you signed up for 12 years ago used that Hotmail address, and you forgot to unlink the Hotmail address from a few other accounts, including Paypal and LinkedIn. Now the hacker has the ability to access your LinkedIn account, change account credentials on your banking, and possibly access accounts you don’t even remember you had. You can imagine how this gets problematic… the ability to send and receive from your email address typically provides the ability to get access to all other accounts, if by no other means than requesting a password reset. And this is just the annoying scenario where you have to deal with correcting identity theft on your own… at least you didn’t drag your friends down.

Instead, the hacker could exploit your Lint Sculptures Discussion Forums friends of 15 years. Does everybody need a direct message and 10,000 forum posts offering black market Viagra? No problem. Or how about a few messages to trusted friends to install this Lint Sculpting Simulation program… you know it doesn’t have a virus because your trusted friend of 15 years swears it’s great. Everybody wants to be part of a botnet, right? All of these acts may seem pointless to you, but hackers have a way of generating value (and money) from these pointless acts, and it isn’t much effort (a lot of it is automated), so it happens.

These scenarios may sound ridiculous, but two years ago I was contacted by a long-time friend who was traveling abroad and all of his possessions had been stolen, his family was stranded, and he needed me to send money. What was true is that he was traveling with family; the rest was made up by a hacker that got enough information to know I was a friend who would help, knew when the family was traveling, and knew when the story might make sense. Everything the hackers needed to make this happen came from accessing worthless accounts.

Steps to Making Yourself More Secure

Security must be balanced with convenience. When being secure is a hassle, people naturally find (unfortunate) workarounds that make things less secure. If you require a password that is 20 characters long and random, look around the person’s desk for the PostIt (or possibly worse, in their “passwords.txt” file on their desktop). The sweet spot is a mild inconvenience that dramatically improves security. I find there’s a few easy practices that fit into this sweet spot…

Two-factor Authentication

Systems that require two components to authenticate are substantially more secure than password-only systems. To access an account, it requires something you know (e.g. the password), and something you have, like a key. The “key” today is typically an application like Google Authenticator, or an SMS message with a code sent to your phone, both of which provide a unique code that is only valid for 1-5 minutes. Many services offer this, including Gmail, Facebook, Twitter, Dropbox, and a few banks (seriously, banks, WTF?).
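If you are curious how those rotating codes work, the TOTP scheme behind authenticator apps is easy to play with. A minimal sketch using the pyotp library:

```python
# Time-based one-time passwords (TOTP), the scheme used by apps like
# Google Authenticator. Requires: pip install pyotp
import pyotp

secret = pyotp.random_base32()      # normally provisioned by the service, often as a QR code
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())  # 6-digit code that rotates every 30 seconds
print("Valid?", totp.verify(totp.now()))
```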

The beauty of Two-factor Authentication is that, even if your password is breached, it doesn’t allow the hacker to access your account. So when you are at that hotel using the guest computer (with its key-logger) to print your flight itinerary from your Gmail account, it doesn’t matter… the hacker only has 50% of what they need.

The inconvenience of adding Two-factor Authentication is typically an additional 20 seconds and, since many services allow you to say “remember me for 30 days”, it’s less than a minute a month (and… don’t use “remember me” on any shared machine).

Unique Passwords

If I told you I had every lock I use in my life (home, office, safety deposit box, cars, bike lock, vacation house) re-keyed to use the exact same key, you’d probably agree that it would be disproportionately bad if somebody found my bike key. When you apply this to online habits, people seem oddly comfortable with one key for almost everything, and a special key for their bank account (but online, weak keys often provide access to special keys).

Use a different (and strong) password for everything. This, of course, is a hassle… nobody can remember 150 different strong passwords, especially when you have to change them all every 3 weeks when you get the latest exploit notice from Yahoo!

One solution is to have a hard password that is modified in a way that you know for each service. As an example, my password is “nS72!la^mq” and I add the first four letters of the website it’s for, in reverse… so for Yahoo! it becomes “nS72!la^mqohaY” and for Google it is “nS72!la^mqgooG”. This has a few flaws, including making it hard to change passwords, but it’s a substantial improvement over “swordfish” for everything.
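Here is that per-site scheme as a tiny function, just to make it concrete (illustrative only – a password manager, below, is still the better answer):

```python
def site_password(base: str, site: str) -> str:
    # Append the site's first four letters, reversed, to the base secret.
    return base + site[:4][::-1]

print(site_password("nS72!la^mq", "Yahoo"))   # nS72!la^mqohaY
print(site_password("nS72!la^mq", "Google"))  # nS72!la^mqgooG
```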

A better solution is a password manager. Services like LastPass and Passpack provide a secure way for you to store and retrieve complicated passwords. Legitimate services encrypt your data in a way where they don’t actually know or even have access to your password, so a hacker that steals their database ends up with a ton of encrypted files and no keys. While there are ways that could be exploited, these services are certainly better than any other options available at a consumer level (and if you’re really paranoid, some make the source code available for you to keep the encrypted data only on your computer).

Whatever you do, never, ever, ever keep a password file on your computer, even if you think you’re clever by naming it “groceries.doc”.

Don’t Share Accounts

Sharing accounts invariably leads to other poor security practices, like the need to email everybody when a password changes or having a shared password file somewhere. And, when one of the people sharing your account gets hacked, this means the shared account gets hacked (and probably every other account in that shared password file so cleverly named “groceries.doc”)

This isn’t 1997 – these days there are very few reasons why each person can’t have their own credentials, especially for email. Only share accounts when separate accounts are not possible (I’m looking at you, Netflix). If you do need to share accounts, use a password manager that offers sharing of specific entries, which means that only the minimum exposure is shared and it is simple to update credentials (Passpack does this nicely).

Don’t Click Links

Okay, so the Interwebs sort of suck if you follow this rule exactly, and you’ll dead-end on more than one website. However, for any site you are going to access and provide your credentials, enter the URL directly.

Did you just receive a weird email from PayPal telling you that Ned just paid you $42 for a lint sculpture you don’t remember selling? Instead of clicking on the “collect your money” link in the email, type “paypal.com” in your browser bar directly and see if the transaction is in your account history. Many phishing emails look and smell like the real thing because it is pretty simple to copy the real thing and send you to “paypaI.com” (see what I did there? that was a capital “i”, not an “l” in that URL) to steal your password. Of course, if you’re using Two-factor Authentication, a stolen password is less of a problem.

Secure Your Family

I used to get sick a couple of times a year… no big deal, just a sniffle every now and then. When I had kids, my health status flipped and it seemed like a couple of times a year I wasn’t infected with whatever was festering in the cesspool of Cheerios, finger paint, juice boxes and runny noses known as preschool.

My point is, there is almost certainly going to be an overlap of your family’s online account footprint, and when one person gets hacked it will likely be a vector for the rest of your family. Sharing documents in Dropbox, G Suite (Google Docs), or Amazon family all provide opportunities for a hack to spread. Protect your accounts by having those close to you keep their accounts secure (and… that is the real reason I wrote this post – pure selfishness as I protect my own accounts).

Do you have other tips or suggestions to help make the average person more secure? Share them in the comments section!