Jul 23, 2014

Intelligent VIP

posted by Bryon Moyer

This year’s DAC included a discussion with Arrow Devices. They’re a company exclusively focused on protocol VIP. They’re not a tool company (other than, as we’ll see, their debug assistant); their VIP plugs into any of your standard tools.

There are three distinct angles they play: verification (making sure your design works in the abstract, before committing to silicon), validation (making sure the silicon works; they include emulation models in this category as well), and debug.

Their focus is on protocol abstraction: allowing verification to proceed at a high level so that designers can execute their tests and review the results at the level of the protocol rather than at the signal level. This added semantic intelligence is how they claim to distinguish themselves from their VIP competition, saying that verification can be completed two to three times faster than with competing VIP.

The verification suites consist of bus-functional models (BFMs) and suites of tests, coverage, and assertions. These work in virtual space. The validation suites, by contrast, have to be synthesizable – hence usable in emulators. They include software APIs and features like error injection. Their debugger is also protocol-aware, although it’s independent of the VIP: it works with anyone’s VIP based on modules that give the debugger the protocol semantics.
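
To make the protocol-abstraction idea concrete, here's a minimal sketch of what a transaction-level test driven through a BFM might look like (the class and method names are hypothetical, not Arrow's actual API):

```python
# Hypothetical transaction-level test via a protocol-aware BFM.
# Class and method names are illustrative, not Arrow Devices' API.

class UsbBfm:
    """Bus-functional model: turns protocol-level requests into signal activity."""
    def send_setup(self, request, value):
        # A real BFM would drive the corresponding packets onto the interface signals.
        print(f"SETUP: {request} value=0x{value:04x}")
        return {"status": "ACK"}

def test_set_address(bfm):
    # The test reads at the protocol level: no clock edges, no individual wires.
    response = bfm.send_setup(request="SET_ADDRESS", value=0x0005)
    assert response["status"] == "ACK", "device should ACK a valid SET_ADDRESS"

if __name__ == "__main__":
    test_set_address(UsbBfm())
    print("transaction-level test passed")
```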

One of the effects of digging deep into a protocol is that you occasionally uncover ambiguities in the standards. When they find these, they take them in a couple of directions. On the one hand, they may need to build option selections into the VIP so that the customer can choose the intended interpretation. On the other hand, they can take the ambiguities to the standards bodies for clarification.
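
As a rough illustration of the first case, such a selectable interpretation might surface as a VIP configuration knob, something like the (entirely hypothetical) sketch below:

```python
# Hypothetical VIP configuration knob for an ambiguous point in a standard.
# The option name and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class VipConfig:
    # Where the spec is ambiguous (say, about how a borderline timeout is judged),
    # the VIP exposes the choice so the user can match the interpretation they intend.
    timeout_interpretation: str = "strict"   # "strict" or "lenient"

cfg = VipConfig(timeout_interpretation="lenient")
print(cfg)
```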

On the debug side of things, the protocol awareness ends up being more than just aggregating signals into higher-level entities. When testing a given protocol, the specific timing of signals may vary; a correct implementation might have some cycle-level variations as compared to a fixed golden version. So they had to build in higher-level metadata, assigning semantics to various events so that the events can be recognized and reported. This tool works at the transaction level, not at the waveform level; they’re looking at connecting it to a waveform viewer in the future.
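
As a toy sketch of that idea (my own illustration, not their tool): compare logged events by their protocol semantics rather than by exact cycle timestamps, so that two correct implementations with different cycle-level timing still match.

```python
# Toy illustration of protocol-aware comparison: two traces are considered
# equivalent if the ordered sequence of protocol events matches, even when
# the cycle-level timing differs. Event names are invented for the example.

def protocol_events(trace):
    """Strip cycle timestamps, keeping only the ordered event semantics."""
    return [event for _cycle, event in trace]

golden = [(100, "LINK_UP"), (140, "SETUP"), (141, "DATA0"), (150, "ACK")]
dut    = [(100, "LINK_UP"), (143, "SETUP"), (145, "DATA0"), (156, "ACK")]  # same events, shifted cycles

assert protocol_events(dut) == protocol_events(golden)
print("traces match at the protocol level despite cycle-level differences")
```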

Their protocol coverage varies.

  • For verification, they cover the JEDEC UFS (Universal Flash Storage) protocol, MIPI’s M-PHY, UniPro, and CSI-3 protocols, and USB power delivery and 2.0 host/device protocols.
  • For validation, they cover only USB 3.0, although they also claim to be the only ones offering validation VIP for USB 3.0.
  • Finally, the debugger has modules supporting USB 3.0 and 2.0; JEDEC UFS; PCIe/M-PCIe, MIPI UniPro, CSI-3 and -2, and DSI; and AMBA/ACE/AXI/AHB/APB.

You can find out more on their site.

Jul 22, 2014

SiTime Adds Temperature Compensation

posted by Bryon Moyer

SiTime came out with a 32-kHz temperature-compensated MEMS oscillator a few weeks back, targeting the wearables market. 32 kHz is popular because dividing by an easy 2^15 gives a 1-second period. Looking through the story, there were a couple of elements that bore clarification or investigation.
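
The arithmetic behind the “32 kHz” choice: the part actually runs at 32,768 Hz, and 32,768 = 2^15, so a simple 15-stage binary divider yields exactly one pulse per second.

```python
# Why "32 kHz" is convenient for timekeeping: 32,768 Hz / 2**15 = 1 Hz.
f_osc = 32_768          # Hz: the actual oscillator frequency behind "32 kHz"
divider = 2 ** 15       # a 15-stage binary divider
print(f_osc / divider)  # -> 1.0, i.e., a 1-second period
```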

Let’s back up a year or so to when they announced their TempFlat technology. The basic concept is a MEMS oscillator that, somehow, is naturally compensated against temperature variation, with no circuitry required to do explicit compensation.

At the time, they said they could get to 100 ppb (that’s “billion”) uncompensated, and 5 ppb with compensation. (The “ppb” spec represents the complete deviation across the temperature range; a lower number means a flatter response.) This year, they announced their compensated version: They’re effectively taking a 50 ppm (million, not billion) uncompensated part and adding compensation to bring it down to 5 ppm. I was confused.

On its face, the compensation is a straightforward deal: take the temperature response of the bare oscillator and reverse it.

[Figure: compensating the bare oscillator’s temperature response by applying its reverse. Image courtesy SiTime]
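
As a back-of-the-envelope sketch of the principle (the curve shape and numbers below are made up, not SiTime’s data): model the bare oscillator’s frequency error over temperature, fit that response, subtract the fit, and the residual spread is what’s left after compensation.

```python
# Illustrative only: a made-up temperature response, compensated by
# subtracting a polynomial fit of that same measured response.
import numpy as np

temps = np.linspace(-40, 85, 126)                                       # deg C
raw_ppm = 25 - 0.012 * (temps - 25) ** 2 + 2e-5 * (temps - 25) ** 3     # fake curve, ~50 ppm spread

fit = np.polyfit(temps, raw_ppm, deg=2)                  # "learn" the response...
compensated_ppm = raw_ppm - np.polyval(fit, temps)       # ...and subtract (reverse) it

print("uncompensated spread: %5.1f ppm" % (raw_ppm.max() - raw_ppm.min()))
print("compensated spread:   %5.1f ppm" % (compensated_ppm.max() - compensated_ppm.min()))
```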

But what about the “millions” vs. “billions” thing? Why are we compensating within the “millions” regime if they could get to ppb uncompensated?

Turns out, in the original TempFlat release, they were talking about where they think the TempFlat technology can eventually take them – not where their products are now. For now, they need to compensate to get to 5 ppm. In the future, they see doing 100 ppb without compensation, 5 ppb with compensation. That’s a 1000x improvement over today’s specs. Critically, based on what they’ve seen published by their competition, they don’t see their competitors being able to do this.

So, in short: ppmillions today, ppbillions later. These are the same guys, by the way, that have also implemented a lifetime warranty on their parts.

There was one other thing I was hoping I’d be able to write more about: how this whole TempFlat thing works. We looked at Sand 9’s and Silicon Labs’ approaches some time back; they both use layered materials with opposing temperature responses to flatten things out. So how does SiTime do it?

Alas, that will remain a mystery for the moment. They’re declining to detail the technology as a competitive defense thing. The less the competition knows…

You can read more about SiTime’s new TCXO in their announcement.

Jul 21, 2014

Improved FPGA Tool Results

posted by Bryon Moyer

A bit over a year ago, we looked at startup Plunify, who was marketing cloud-based FPGA tool instantiations. I talked to them again at the recent DAC, and they appear to be carrying out the typical modern startup roadmap, where you start with something, find out what people really do with it, and then use that information to drive new, and sometimes wholly different, products.

What they learned with their original offering was that the analytics module was really popular. So they figured out how to harness that information to help automate design optimization within the FPGA tools.

The result is called InTime, and it rides on top of the Altera and Xilinx tools. It does a series of builds, watches the results, and then makes recommendations to the designer as to which settings and constraints will provide the best results. Notably, it doesn’t touch the RTL, so this is about matching up the existing design with the tool in the most effective way.

This isn’t a typical design space exploration platform, which tends to have an element of randomness. This is a directed algorithm that looks at the results of the original full runs and then uses those analytics to refine the settings and constraints, achieving results that they claim are 30-40% better than what design space exploration provides.
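
A heavily simplified caricature of the difference (my own sketch, not Plunify’s algorithm or actual Altera/Xilinx settings): pure exploration samples tool settings blindly, while a directed loop keeps the best build so far and uses its results to decide what to vary next.

```python
# Caricature of directed, result-driven settings refinement.
# The settings, the "build" function, and the scoring are all invented;
# a real flow would launch full synthesis/place-and-route for each candidate.
import random

SETTINGS = {"effort": ["low", "medium", "high"], "seed": list(range(8))}

def build(cfg):
    """Pretend build: return a timing score (higher is better) for these settings."""
    base = {"low": 0.70, "medium": 0.85, "high": 0.90}[cfg["effort"]]
    return base + 0.01 * ((cfg["seed"] * 7) % 5)    # fake seed-dependent variation

def directed_search(iterations=10):
    best_cfg = {"effort": "medium", "seed": 0}      # start from the tool defaults
    best_score = build(best_cfg)
    for _ in range(iterations):
        knob = random.choice(list(SETTINGS))        # vary one knob at a time...
        candidate = dict(best_cfg, **{knob: random.choice(SETTINGS[knob])})
        score = build(candidate)
        if score > best_score:                      # ...keeping only changes the results justify
            best_cfg, best_score = candidate, score
    return best_cfg, best_score

print(directed_search())
```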

Not only does it improve the design at hand, but they say it can learn over time. If you’re using the cloud, the global tool accumulates that learning and improves over time. One thing that’s changed from their original offering, however, is the cloud focus: while the cloud version is still available, too many companies are reluctant to go there, so they also support local instantiation. When implemented locally, the learning accrues to the benefit of all local designs.

You can learn more in their recent announcement.

