posted by Bryon Moyer
The Internet of Things (IoT) is all about platforms. Exactly what constitutes a “platform,” however, is often unclear and left to the reader. In some cases, they’re so obscure that you have to pull teeth to find out what the platform actually contains.
Many of the platforms are about the connection between Things and the Cloud (and the Phone). But Express Logic recently introduced their X-Ware Platform, targeted at IoT applications, and its focus is on basic Thing OS and middleware: all of the pieces that allow a Thing to communicate, take on peripherals, store files, and so forth. (To be clear, this platform required no teeth-pulling to suss out.)
This follows what seems to be a trend: while embedded engineers used to piece together the elements that make up a system, now much of that work is being pre-done or pre-bundled. It would appear to serve two purposes: development goes faster for the engineer and the seller gets all the pieces rather than having their technology intermixed with someone else’s. That would seem to be a win-win (assuming that all pieces of the bundle work well).
In Express Logic’s case, they’ve bundled their ThreadX RTOS with NetX, USBX, FileX, GUIX, and TraceX offerings together to provide basic OS, connectivity (TCP/IP), USB host functionality, file management, GUI design, and “event analysis” (for debugging and profiling) capabilities.
In addition to bundling the middleware, they’ve “pre-ported” it – not just for ARM, but for specific development boards using ARM. This means that all of the board-support stuff has already been done for the specific peripherals and configurations of those boards, further saving time. Their ARM focus is described as “initial,” but ARM is everywhere, with a bazillion lines of code already out there. I’m sure other processors can get supported, but I’ll wager they have a hard row to hoe.
X-Ware also ties in with IAR’s development environment, enabling what they call “RTOS-aware debugging” – making it easier to track threads and tasks when sorting out issues.
And they’ve included 15 different demos – most of them for each of the components in the bundle, with one that’s a medical demo combining the whole lot.
You can get more detail in their announcement.
posted by Bryon Moyer
November would appear to be Conference Month.
Of course, no one has conferences in the summer. Oh, wait! Semicon West does! OK, well, most folks assume no one is home during summer, so they wait until fall. Can’t do December because of the holidays… September, well, everyone is getting back from summer… October? Yeah, a few sprinkled here and there. And then there’s November.
Conferences are a dime a dozen these days. Some are big single-company affairs (Intel Developer Forum and ARM TechCon, for example). Others are put on by organizations, some venerable, some newly sprouted, having decided that events can be lucrative.
So, given all the conferences, which ones to go to? Your interests and mine should be broadly aligned, since you’re looking for new technology to help with your work and I’m looking for interesting stories about new technology that will help you with your work. Given all of the overlapping conferences, I’ve been pretty choosy about which ones to attend. That’s not to say that everything I’ve declined is not worthwhile; it’s more that I’m expecting a few of them to be particularly worthwhile.
Here’s what I’m looking forward to over the next several weeks.
- Touch-Gesture-Motion in Austin 10/29-10/30: this has been my go-to event for, well, touch, gesture, and motion interface technologies. Put on by IHS, it normally ends up being a good two-day overview of what’s happening in those industries. It’s why most of my touch and gesture pieces come out in the December/January timeframe.
- ICCAD in San Jose 11/2-11/6: this IEEE/ACM-sponsored EDA conference seems to have picked up steam over the last couple of years. It seems to be the second venue where EDA folks focus some announcement attention.
- The MEMS Executive Congress in Scottsdale 11/5-11/7: This is the annual who’s-who confab of the MEMS industry, put on by the MEMS Industry Group. While there are MEMS- or sensor-related shows sprinkled throughout the year, this is a higher-level view, and it also becomes a focal point for announcements. Amelia and Kevin will also be here.
- TSensors in San Diego 11/12-11/13: Organized by MEMS veteran Dr. Janusz Bryzek, this is the follow-on to the initial TSensors meeting of last year. There have been other ones since then in different parts of the world; I believe this to be the flagship session where we’ll get the latest on efforts to bolster the sensor market.
- IDTechEx in Santa Clara 11/19-11/20: IDTechEx is actually the organizing entity, but it’s easier to say that than to name all of the collocated conferences happening during those two days. My focus will be in Energy Harvesting and Internet of Things, but there will also be sessions on wearable electronics, printed electronics, supercaps, graphene, and 3D printing.
- IEDM in San Francisco 12/15-12/17: This IEEE conference is the go-to event for transistors and other basic devices. No tradeshow this: it’s serious technology, not for the faint of heart. If you think you really know your stuff, then come here for an instant humbling. Apparently this year will feature, among other things, a face-off between Intel (FinFET on bulk) and IBM (FinFET on SOI). I can hardly wait!
If you’re there and notice me lurking in the shadows, don’t hesitate to say hello.
posted by Bryon Moyer
Some time back we briefly introduced Elliptic Labs’ ultrasound-based gesture technology. They’ve added a new… layer to it, shall we say, so we’ll dig in a bit deeper here.
This technology is partially predicated on the fact that Knowles microphones, which are currently dominant, can sense part of the ultrasonic range. That means you don’t necessarily need a separate microphone to include an ultrasound gesture system (good for the BOM). But you do need to add ultrasound transmitters, which emit the ranging signal. They do their signal processing on a DSP hub, not on the application processor (AP) – important, since this is an always-on technology.
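To give a feel for the arithmetic behind ultrasonic ranging, here is a minimal sketch. It is purely illustrative physics, not Elliptic Labs’ actual signal-processing algorithm (which runs on the DSP hub and is not public): the distance to a reflecting object follows from the echo’s round-trip time and the speed of sound.

```python
# Illustrative sketch of ultrasonic time-of-flight ranging -- not
# Elliptic Labs' actual algorithm, just the underlying physics.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C


def distance_from_echo(round_trip_s: float) -> float:
    """Distance in meters to whatever reflected the pulse.

    The ultrasonic pulse travels out to the object and back, so the
    one-way distance is half the round-trip path length.
    """
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0


# An echo arriving about 1.75 ms after the pulse puts the hand
# roughly 30 cm from the transmitter.
print(round(distance_from_echo(0.00175), 3))
```

The always-on power advantage the company claims comes partly from how little computation this kind of ranging needs compared with processing full camera frames.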
With that in place, they’ve had more or less a standard gesture technology, just based on a different physical phenomenon. They see particular advantage for operation in low light (where a camera may be blind), full sun (which can also blind a camera), and where power is an issue: they claim to use 1/100th the power of a camera-based gesture system. So… wearables, anything always-on. As long as you don’t need the resolution of a camera (which, apparently, they don’t for the way they do gestures), this competes with light-based approaches.
What they’ve just announced is the addition of a third dimension: what they’re calling multi-layer interaction (MLI). It’s not just the gesture you perform, but how far away from the screen you perform it. Or what angle you are from the screen.
For instance, starting from far away, with your hand approaching, at one point it would wake up. Come in further and it will go to the calendar; further still takes you to messages; and finally on to email. Of course, Elliptic Labs doesn’t define the semantics of the gestures and positions; an equipment maker or application writer would do that.
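One way to picture that progression in code is as a simple mapping from hand distance to an interaction layer. This is purely illustrative: the thresholds and actions below are invented for the example, since Elliptic Labs leaves the semantics to the equipment maker or application writer.

```python
# Illustrative only: map hand distance to a UI "layer," mimicking the
# multi-layer interaction (MLI) progression described above. The
# thresholds and action names are made up for this sketch.

LAYERS = [
    (0.60, "wake"),      # hand within 60 cm: wake the device
    (0.40, "calendar"),  # within 40 cm: show the calendar
    (0.25, "messages"),  # within 25 cm: show messages
    (0.10, "email"),     # within 10 cm: open email
]


def layer_for_distance(distance_m: float) -> str:
    """Return the deepest layer whose threshold the hand has crossed."""
    action = "idle"  # nothing happens until the hand is close enough
    for threshold, name in LAYERS:
        if distance_m <= threshold:
            action = name
    return action


print(layer_for_distance(0.30))  # a hand 30 cm away lands on "calendar"
```

Because each layer is nested inside the previous one, moving your hand steadily closer walks through the states in order, which is exactly the kind of natural progression the interface needs.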
And it strikes me that, while this adds – literally – a new dimension to the interface, the semantic architecture will be critical so that users don’t have to mentally map out the 3D space in front of their screen to remember where to go for what. There will have to be a natural progression so that it will be “obvious.” For example, if you’ve gotten to the point of email, then perhaps it will show the list of emails, you can raise and lower your hand to scroll, and then go in deeper to open a selected email. Such a progression would be intuitive (although I use that word advisedly).
A bad design might force a user to memorize that 1 ft out at 30 degrees left means email and at 30 degrees right means calendar and you open Excel with 90 degrees (straight out) 2 ft away and… and… A random assignment of what’s where that has to be memorized would seem to be an unfortunate design. (And, like all gesture technologies, care has to be taken to avoid major oopses…)
Note that they don’t specifically detect a hand (as opposed to some other object); the system registers whatever is out there. You could be holding your coffee cup; it would work. You could be using your toes or a baseball bat; it would work.
You can also turn it off with a simple gesture so that, for example, if you’re on your phone gesticulating wildly, you don’t inadvertently do something regrettable in the heat of phone passion. Or in case you simply find it annoying.
You can find out more in their announcement.
(Image courtesy Elliptic Labs)