> I'm continually surprised by how few software engineers in industry spend the time to pick up HDL and FPGA programming in general.
Insufficient tooling? I did some programming with FPGAs once, and it seemed the best option I had was proprietary software from Altera (Quartus). I never got debugging to work, or perhaps I did and just didn't understand what it was showing me (I'm no hardware guy).
My impression was that an Eclipse-like IDE, perhaps with a built-in hardware simulator, would make things A LOT easier, especially for beginners like me. Of course this could be completely unrealistic and impractical for hardware design, in which case I will show myself out.
A free simulator (like iverilog) plus a free waveform viewer (like gtkwave) is a nice way to start fiddling with these things. Verilog is way, WAY easier to deal with than, say, C, because you have much more visibility (waves instead of a variable view) and much better error detection (none of those bloody memory overruns, a built-in Valgrind in the form of the "X" values, etc.)
I'm a programmer by training, and I got into hardware architecture in part because of how fun Verilog was, actually - seeing all this stuff.
Compile and run Verilog online (which is how I tested the Verilog code in the article):
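For a local setup, the iverilog/gtkwave loop mentioned above might look like this minimal sketch (the file and module names here are made up for illustration):

```verilog
// counter_tb.v - minimal counter plus testbench for iverilog/gtkwave
module counter (
    input  wire       clk,
    input  wire       rst,
    output reg  [3:0] count
);
    always @(posedge clk) begin
        if (rst) count <= 4'd0;
        else     count <= count + 4'd1;
    end
endmodule

module counter_tb;
    reg clk = 0;
    reg rst = 1;
    wire [3:0] count;

    counter dut (.clk(clk), .rst(rst), .count(count));

    always #5 clk = ~clk;          // 10-time-unit clock period

    initial begin
        $dumpfile("dump.vcd");     // waveform file for gtkwave
        $dumpvars(0, counter_tb);
        #12 rst = 0;               // release reset after a bit
        #100 $finish;
    end
endmodule
```

Compile with `iverilog -o sim counter_tb.v`, run with `vvp sim`, then open `dump.vcd` in gtkwave. Note that `count` shows up as "x" in the waves until the first clock edge, which is exactly the built-in error detection described above.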
Eh, gtkwave is the sort of thing I feel allergic to. I've used it extensively in the past, but at no point was it something I enjoyed using. More so than disliking proprietary software, what I really don't like is working with non-browser, non-terminal-based programs. Tools like that just don't seem designed with users like me in mind. If I have to work with programs like that then of course I will, but in my personal time that sort of program drives me away.
If gtkwave took some high-level inspiration from graphviz, that would be great.
I started learning hardware design a couple of months ago, and this is good advice.
However, I'd like to mention that many constructs work fine in simulation but are completely backwards or impossible to synthesize for an FPGA or ASIC. Writing Verilog is easy; writing good Verilog is much more difficult.
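Two classic examples of that simulation/synthesis gap, as a sketch (module names are hypothetical):

```verilog
// Simulates one way, synthesizes another: the incomplete
// sensitivity list means simulation won't update y when only b
// changes, while synthesis builds a plain AND gate that does.
module bad_example (input wire a, b, output reg y);
    always @(a) begin
        y = a & b;
    end
endmodule

// Delays are another trap: legal in simulation, silently
// dropped by synthesis, so the hardware behaves differently.
module delay_example (input wire clk, d, output reg q);
    always @(posedge clk)
        #3 q <= d;         // the #3 vanishes in real hardware
endmodule
```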
I don't know if there are any good, comprehensive resources for learning hardware design coming from a software background; I have the chance to work with some ASIC guys who can help me with those things.
In my understanding the tooling for FPGAs has been made purposefully complicated to achieve a lock-in. The issue is that adoption of new tooling practically requires compatibility with the existing tools from the two key market holders, and I do not think even they themselves could do that anymore.
Yes and no. On the one hand there are lots of open source efforts at Hardware Description Languages (HDLs), such as JHDL, variations on SystemC, Verilog, and the async work that was going on at Utah State, I believe. But the "place and route" bits are, by their nature, unique to each FPGA architecture. Further, there is a lot of work on encrypting bitstreams so that designs can't be "stolen", and you end up with a very proprietary 'blob' at the bottom of the stack. Xilinx did an "open" FPGA for a while (the 3000 series), but nobody used it in volume at the time.
That said, the complexity of the FPGA tools is also as much about the circuit capabilities (describing IO pads for example in terms of their power levels, latencies, and connectivity) as it is about the overall complexity of the problem.
The methodology for designing these things is certainly demanding. And of course it's a very test-driven practice, since no hardware engineer ever seems to just "try" something in an FPGA until they have a testbench that can simulate it (testbenches being the equivalent of unit tests in software).
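That testbench-first habit tends to look something like this self-checking sketch (hypothetical names), run entirely in the simulator before anything touches an FPGA:

```verilog
// adder_tb.v - a self-checking testbench sketch
module adder (input wire [7:0] a, b, output wire [8:0] sum);
    assign sum = a + b;
endmodule

module adder_tb;
    reg  [7:0] a, b;
    wire [8:0] sum;
    integer i;

    adder dut (.a(a), .b(b), .sum(sum));

    initial begin
        for (i = 0; i < 100; i = i + 1) begin
            a = $random; b = $random;
            #1;
            if (sum !== a + b) begin   // !== also catches x/z values
                $display("FAIL: %d + %d gave %d", a, b, sum);
                $finish;
            end
        end
        $display("PASS");
        $finish;
    end
endmodule
```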
Most (all?) vendors offer a bit of free stuff, and I know the Xilinx place and route engine can take its input from any EDIF source, so you can write your own 'design' tools if they output EDIF. I'm a bit scarred because there was an effort called the "CAD Framework Initiative" which was going to standardize APIs between all the layers of the stack, but once vendors figured out that their high-priced tools could be easily disrupted, they backed out of that standard in a hurry. Too bad, really.
An industry ripe for disruption. My guess is that patents and military contracts are propping up the few entrenched vendors.
Eventually, the simple FPGA designs from 20 years ago will perform "good enough" when shrunk down to modern manufacturing processes. Only then will the new age of reconfigurable computing begin.
As far as I've seen, the main users of FPGAs are developers eventually targeting ASICs.
A video codec or whatever might first be implemented in C++, then 'translated' to Verilog and tested thoroughly on an FPGA. Having solved all the logical issues, and detected a lot of potential timing issues, the HDL design could be translated into an ASIC design and heavily simulated. Then, confident that the design is good, the company could spend the bucks to make a mask for mass manufacture.
You do occasionally see people using FPGAs where they need the zero latency of a hardware design, but only need a couple of devices. Usually this is in RF research labs and the like.
I suspect most people with compute problems would be better off using a GPU.
Here are a few good reasons why it's hard to make easy-to-use, vendor-neutral FPGA tools:
- all the devices/bitstream formats are proprietary with little or no documentation of the logic blocks or programmable interconnect structures. it is probably technically easier to build a new FPGA from scratch and design tools for that, than to reverse engineer existing chips [1]
- there is very little cross-compatibility between vendor products (a 4-lut here, a 6-lut there, some carry-chain logic here, a DSP block there)
- all the optimizations (synthesis, place-and-route) are NP-hard problems
- sequential imperative (C-like) thinking is not the correct way to make parallel systems
- the FPGA vendors compete on tools and offer their software for free to push hardware. hard for an independent vendor to compete.
> all the optimizations (synthesis, place-and-route) are NP-hard problems
I suspect this one in particular could be chipped away at pretty successfully if the tooling were less closed. When you get some open benchmarks and let both researchers and systems-builders hack away at it, you can get very good practical solutions to NP-hard problems, as the current crop of SAT-solvers, SMT engines, and TSP-routing tools attest.
Commercial "vendor neutral" tools do exist: Synplicity was a public company that sold such tools for several years before branching into prototyping board hardware sales and then being acquired by Synopsys [1]. Their tools for synthesis, partitioning, and debug are still available [2] and are strong sellers to both Altera and Xilinx FPGA developers.
While I find your comment informative, points 3 and 4 don't really explain why vendor-neutral tools cannot exist. Point 2 (as best I understand it) should not be a dealbreaker if documentation and specifications are available (which is covered by point 1). It seems to me that point 5 is part of what they have done to prevent alternative tools from coming to market. Lastly, point 1 is exactly what begs for disruption.
The manufacturers also treat the exact implementation details of their FPGAs (how exactly all the routing fabric is structured, the fine details of some of the macros, etc.) as their secret sauce that gives them an edge over their competitors. So they're pretty hostile to the idea of documenting anything.
It being "hard" shouldn't give them a pass on quality control. From the early Project Navigator to ISE to Vivado, Xilinx tools are buggy, poorly documented, and completely unintuitive. Yet, they're still better than Altera's.
I moved one of our smaller projects over to Altera because they were first to 28nm plus the local Altera team was willing to help out with porting any device-specific code.
I was very disappointed with the tools. It's worth a write-up all on its own, but my staff struggled through the tooling, which consumed most of their time.
Just asking, not questioning: Are you sure they would not have similarly struggled if they were pros in Altera and were moving to Xilinx? An issue with counter-intuitive and buggy tools is exactly that you get used to them, your mind slowly forgets how messy a design the tools have, and now everything else seems counter-intuitive.
That's a good question. It's almost impossible to tell objectively. I would say that the Altera tools felt more like circa 2007 Xilinx Tools. They lacked quality control AND maturity. This is all very subjective, though.
Ugh. I remember 2007 Xilinx tools and I wouldn't wish them on anyone. I was so surprised when I started doing FPGA work again and found even the old ISE versions I'm using (2010/2011-ish) are so much better!
I have access to newer software, but the designs I'm working with don't import correctly into them. :-P
I have some opinions here and I wish I was prepared to make good comments. I've seen some horrible things. I hope you give it as thorough a going-over as you did in this article.
The problem is a lack of overlap between software engineers and hardware engineers, particularly in the open source arena.
I'm a digital designer and do software as a hobby. I've thought about writing up a guide to getting into FPGA hacking. Tools are always a hindrance: simulation has some free tooling, but as far as I know there is nothing for synthesis.
There could possibly be a Kickstarter idea here: pick one of the architectures they have targeted and get some designers to implement it. You could even host one of these theoretical architectures on top of another FPGA. It'd be slower than the host, but feasible.