Last year, I built a DIY Raspberry Pi-controlled electric standing desk. It was a great project and I’ve gotten a ton of use from it.
To move the desk, I simply had to:
- Launch PuTTY on my computer
- Connect to the Raspberry Pi
- Log in
- Navigate to the directory containing the program
- Run `sudo python3 robotdesk.py`
- Tell it what height to move to
It works and is definitely better than sitting all day… but we can do better than that, right?
Yes, we can. My desk is now voice-activated via Amazon Alexa!
Isn’t that better?
How does it work?
There are three parts to the integration:
- Alexa Skill
- Azure-hosted API
- Desk Controller
Amazon has an SDK for the Amazon Echo family of devices called the “Alexa Skills Kit”. In short, an Alexa skill contains three things:
- The things your skill can do (called intents)
- Sample ‘utterances’ and how they map to the intents
- Instructions on how to call your program
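As a concrete illustration of the first two pieces, here’s what a desk skill’s interaction model might look like, expressed as a Python dict. The intent name, slot name, and utterances below are hypothetical examples, not my skill’s actual speech assets:

```python
# Illustrative Alexa interaction model for a desk skill. The intent and
# slot names here are made up for the example.
INTERACTION_MODEL = {
    # The things the skill can do (intents)
    "intents": [
        {
            "name": "SetDeskHeightIntent",
            "slots": [{"name": "Height", "type": "AMAZON.NUMBER"}],
        },
    ],
    # Sample utterances, mapped to the intent they trigger
    "utterances": {
        "SetDeskHeightIntent": [
            "set my desk to {Height}",
            "move the desk to {Height} inches",
        ],
    },
}
```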
Your program can either be an AWS Lambda function or an HTTPS API that the Alexa service will POST a message to.
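The HTTPS-API route can be sketched as a plain function, independent of whatever web framework actually receives the POST. This is a minimal sketch, assuming the hypothetical `SetDeskHeightIntent` with a `Height` slot; it takes the JSON body the Alexa service sends (already parsed into a dict) and returns the response dict to serialize back:

```python
def handle_alexa_request(body):
    """Translate an Alexa IntentRequest into a spoken response.

    `body` is the parsed JSON that the Alexa service POSTs to the
    skill endpoint. The intent and slot names are hypothetical.
    """
    req = body.get("request", {})
    speech = "Sorry, I didn't understand that."

    if req.get("type") == "IntentRequest":
        intent = req.get("intent", {})
        if intent.get("name") == "SetDeskHeightIntent":
            height = intent["slots"]["Height"]["value"]
            # ...here the real API would hand a command to the desk...
            speech = f"Moving the desk to {height} inches."

    # Standard Alexa response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```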
It’s easy to set up a simple skill to get started. Check out the Amazon developer site for how-tos, tutorials, etc. Or, if you want to learn in a more structured way, check out “Developing Alexa Skills for Amazon Echo” on Pluralsight.
The speech assets for my desk are on GitHub.
The API is simple and does two things:
- Receives commands from the Alexa service and translates the intent and its parameters into a desk command
- Responds to the desk controller’s requests for commands; the controller uses a long-polling mechanism to fetch them
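The long-polling side can be sketched with a thread-safe queue sitting between the two routes: the Alexa-facing route enqueues a command, and the desk controller’s poll request blocks until one arrives or the poll times out. The function names here are illustrative, not the actual API’s:

```python
import queue

# One pending-command queue shared between the two API routes.
_commands = queue.Queue()

def enqueue_command(command):
    """Called by the Alexa-facing route after translating an intent."""
    _commands.put(command)

def wait_for_command(timeout_seconds=30):
    """Called by the desk controller's long-poll request.

    Blocks until a command is available or the timeout expires.
    Returns None on timeout so the controller can simply poll again.
    """
    try:
        return _commands.get(timeout=timeout_seconds)
    except queue.Empty:
        return None
```

On the desk side, the controller just loops: request a command, execute it if one came back, and immediately re-poll.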
The code is here (disclaimer: this is hacked-together code, not production quality!).
There’s still work to do: the program has lots of room for improvement, Amazon’s skill-certification process is rigorous, and the interactions with the desk will get better as I find the terms I want to use to control it.
Have feedback? I’d love to hear it. Leave a comment or reach me @_brentonc on Twitter.