This does a good job of capturing the first part of the problem. Note that Xilinx did make an 'open' FPGA part, the 3000 series; it did not do well for them. The critical thing to understand is that FPGA users (the big ones, not the casual ones) don't want them to be open. They want the information they program into them to be secret, so that their hardware is not easily duplicated by the folks in China and mass produced to undercut them. You can't patent schematics. So these folks want a way for a board to be assembled in China but not reproduced there, except as an exact clone (which can be blocked as a counterfeit product). Ok, so where does that leave us?
Well, it might be easier to create your own FPGA design, have TSMC make it on their cheapest process, and then try to sell those chips. And if you're saying 'Jeebus Chuck! That isn't easy at all,' you would be right. And that is why a 200 MHz general-purpose CPU is easier to turn into a 'custom chip' than an actual custom chip.
So where does that leave us? Well, all the bits between HDL and chip can be done, in a small-scale way, either in simulation or on bulky small-scale hardware. You can program CPLDs to be simple Logic Units (LUs) for your FPGA and wire them together on a PC board. You can find the things that need to be parameterized (intra-LU timing, global clocks, I/O configuration) and build tools which can synthesize, place & route, and download to your 'pseudo' FPGA.
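To make the 'Logic Unit' idea concrete, here is a minimal Python sketch of what a lookup table in an FPGA (or a CPLD standing in for one) really is. The encoding here is purely illustrative, not any vendor's format: the configuration bits are just the contents of a truth table, and the inputs select an entry.

```python
# A k-input LUT is a 2^k-entry truth table: the "configuration bits"
# are the table contents, and the inputs select one entry.
def make_lut(truth_table):
    """truth_table: list of 0/1 values, one per input combination."""
    def lut(*inputs):
        index = 0
        for bit in inputs:              # pack the inputs into a table index
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# "Configure" a 2-input LUT as XOR: table indexed by (a, b).
xor_lut = make_lut([0, 1, 1, 0])
print(xor_lut(0, 1))  # 1
print(xor_lut(1, 1))  # 0
```

Everything a bitstream does for the logic fabric boils down to filling in tables like this and choosing what wires feed their inputs.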
And if you have all that tooling, and you can show it works, you might be able to get a partner to make some small-scale parts for you (200-1000 LUs). It won't happen overnight; that is a 5-year plan minimum, I think. And you have to get over the hump of developing tooling for an FPGA that may never exist. On the plus side, it should be good for several Master's-level projects and probably a PhD or two, and if your simulation work were robust you might become the 'standard model' for testing assumptions about reducibility of different constructs into hardware. Sort of the Lena for FPGAs: a Playmate photo digitized at USC which became the standard test image for image-processing research. >So how does having a closed tool chain achieve that? It slows reverse engineering, but doesn't itself prevent cloning.
I'm not sure what ChuckMcM was trying to argue. I don't know of any big companies that care whether or not the toolchain/devices are open source. They just want a way to get their design into the world.
What we (users of FPGAs) care about is price; the toolchains are sufficient (though yes, cruddy in various ways). And to be honest, this conversation about the FPGA (and ASIC) toolchains being closed source crops up again and again. They're closed source not because Altera/Xilinx/Microsemi need to keep secrets. They already know each other's 'secrets', and the FPGAs themselves are rather trivial devices. They're closed source not because users want them to be; again, users couldn't care less.
They're closed source because the user base is absolutely tiny. A small user base means we will only ever see tools that are merely sufficient. There is no benefit for Altera/Xilinx/Microsemi in putting extra time and effort into open sourcing their work. And honestly, I wouldn't want them to. I'd rather their time and resources go into continuing to drive the price down and the features up (as does every other company).
Not that open source toolchains/devices wouldn't be great. I follow the MiGen/Milkymist mailing list where some open source FPGA tools are being developed. But in the commercial space, open sourcing FPGAs and their tools isn't the highest priority. Going back to protecting designs, that's what bitstream encryption is for, which all modern FPGAs possess. Though Altera was quite late to the ballgame on that front with their Cyclone series. >I'm not sure what ChuckMcM was trying to argue. If you create a circuit board which has all of the same chips as someone else's, and you've changed the layout, you are completely within your rights to sell it as your own and keep all the money.
However, if you copy firmware, then creating a circuit board with the copied firmware is a copyright violation, and you can be found guilty of copyright infringement in most of our trading partners' jurisdictions. Folks who built cloned CPU boards with a copyrighted BIOS re-implemented the BIOS in a clean room to get their own 'work-alike' version, which then gave them the right to sell their boards.
If, however, you copy an FPGA bitstream, which is the compiled output of a copyrighted HDL description, you're in violation. But building a 'clean room' implementation of an FPGA bitstream is generally infeasible, as it requires you to know both how the manufacturer encodes and encrypts it and all of the functions of the internal chip. A great example of this in action: the recent flap over 'clone' FTDI USB-to-serial chips didn't actually involve a clone of the chip at all.
Instead, the cloners used an ARM Cortex-M0 CPU with a USB peripheral, a UART peripheral, and a bit of code that responded in all of the same ways the FTDI chip (which was presumably an ASIC) responded. That 'workalike' implementation was only possible because the set of inputs and outputs for that system is constrained, and the only IP violation was the re-use of FTDI's VID/PID pair. If they had been forced to use a bitstream file for an FPGA, they would have been liable for up to $30,000 per copy, and jail.
This feels like circular logic to me. We do not want the hardware companies spending time 'open sourcing' their docs because they need to spend that time and money 'making more features', features they wouldn't have to build alone if it were open; but we can't have it open because open sourcing their docs would take time and money. I think there is a big difference between 'users wanting tools to be closed source' and 'users not caring if their tools are closed source'. And I would argue that even if the primary user base does not care, we can do better than that.
It is no excuse to keep using these fragile, lumbering behemoths of bad design and super-seeeeecret tricks that you can learn at a university or on the internet. And history keeps showing the problems of big black boxes with the words 'trust us' written on the outside. All we need is the layout and a map of the bitstream, and open developers will do most of the work for these companies. As for spending their resources 'driving prices down', that is quite relative. If you look at the pricing model of their software, it is clearly not their goal to make that reasonable (at least for Xilinx). And every version of the software stretches to provide arbitrary bullet points on the back of a box that either mean nothing, were tested in suspicious conditions, or conflate multiple optimized tests together to say 'we are better than everyone at everything.'
How many times will that be said by all competitors at the same time, and how many times will it be believed? Anyway, with all those features, somehow ISE is still unstable on Windows. >'get to the level' of ISE. I do not want to build an IDE. I want an open toolchain, the thing that the ISE interface calls.
There is some math in there that takes certain skills to implement, but I know several people who could do it if they had the layout of the chips. MAYBE the reason there are no open toolchains 'on the level' of ISE and competitors is not that Xilinx has some amazing secret sauce, but that the whole industry is caustically secretive and horribly anti-competitive. This is an interesting story. An unfortunate end. Most of these things sound like short-term issues that get resolved as the tools mature. Alternatively, Xilinx could have just put the docs out there and said they only support ISE.
That way, if an open source developer makes a tool, Xilinx has no responsibility for it, but can reconsider in a few years in case it gains wide acceptance. I am not a market strategist, but supporting everyone's custom tool sounds like a quick way to waste money and the wrong way to try to open source something; it is similar to honoring every coupon people bring in, even hand-written ones. From what I've read of internal leaks, the leakers get caught, charged, and dealt with, which perhaps discourages other leakers. But there isn't anything stopping an employee from selling or stealing company secrets; that hasn't changed. During the '70s and '80s, though, a lot of manufacturers stopped printing schematics in response to clones appearing which used a circuit that was close to (but not exactly like) theirs and sold at a deep discount.
Early computer board manufacturers used programmable logic to create special 'nodes' within the system which obscured its functioning, such that to clone it you would have to hire an engineer to understand the design and re-design it. Generally that was more effort than most of the cloner companies were willing to go through. (Apparently, if they had their own engineers who could understand the circuit, they didn't need to clone; they could design their own.) FPGAs get you at least two layers of obfuscation: the bitstream is encrypted, and 'decompiling' from the place-and-route configuration bits is not at all straightforward. Folks who have gone down the path of trying to figure this stuff out (as the OP is possibly doing) discover that changing a routing bit early on changes a bunch of other bits later, in very subtle ways. And routing in an FPGA is a great way of screwing up function: when you do timing analysis, if you have to run at 200 MHz you have to make sure your setup and hold times are accurate to within a few nanoseconds across temperature and voltage differences. FPGA vendors sometimes tout this as a 'design security' feature. Decoding this data is a lot more challenging than reverse engineering an image format.
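The timing arithmetic behind that 200 MHz constraint is simple to state, even if meeting it is hard. A minimal sketch, with delay numbers invented purely for illustration:

```python
# At 200 MHz the clock period is 5 ns. A register-to-register path meets
# setup timing when clock-to-Q + logic + routing delay fits within the
# period minus the setup time of the capturing flip-flop.
def setup_slack_ns(f_mhz, clk_to_q, logic, routing, setup):
    period = 1000.0 / f_mhz  # period in ns
    return period - (clk_to_q + logic + routing + setup)

# Invented example delays (ns): routing eats a big share of the budget.
slack = setup_slack_ns(200, clk_to_q=0.5, logic=2.0, routing=1.8, setup=0.4)
print(round(slack, 2))  # 0.3 -- barely positive; a worse route fails timing
```

The point is that the router's choices directly decide whether that slack stays positive across temperature and voltage corners, which is exactly the information the vendors keep to themselves.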
And yes, you can argue this is just business, and it is; this is just 'business defense.' Having done my graduate studies in FPGA architecture and software, I can definitely see where the author is coming from. In fact, it seems like the entire hardware development industry has to face the issue of most tools being closed source. Although I don't have a solution for the technology-specific phases (place and route, bitstream generation, etc.), I am part of a company that is trying to help solve this problem higher up in the tool chain. One of the biggest barriers to developing EDA (electronic design automation) software tools is knowing all the nuances of the various hardware description languages like VHDL, Verilog, and SystemVerilog.
We learned this the hard way through our previous startup (later acquired), which built a hardware verification tool. I don't want to be self-promoting here, but in case anyone in this thread is interested, we are building a platform called Invio that lets you build your own EDA tools. We try to solve more than just the language-support side of things, and all of our platform's inputs and outputs are open standards: Python, TCL, Verilog, SystemVerilog, VHDL, etc. You can look at my profile to find more info, or google "Invio".
I've had similar needs but couldn't find an existing tool, nor a parser on which to build. There are some open source parsers, but they don't seem to do preprocessing and hence lose a lot of context. So I made a parser that might work for your use case given some work. It works for some fairly big codebases, so I know it's not completely broken. I'm not very proud of the Scala code; it's quite ugly in places.
But at least there are some tests :P. This other parser also seems active and worth checking out. EDIT: added link to other parser. I've been wondering recently how viable it would be to implement an FPGA-within-an-FPGA:
* Create a model of a simple, open FPGA.
* Create tools to support the open FPGA.
* Synthesize the FPGA for an existing proprietary FPGA.
* Install the bitstream in SPI NOR.
* Ignore the proprietary FPGA from now on; work with the open FPGA that's inside it.
Yes, it would be absurdly inefficient. However, you can buy pretty huge FPGAs for very little these days.
If the ratio of host-to-target LUTs isn't too bad, and you can find constructs which synthesize efficiently, it might yield something usable. Perhaps it could be a way of bootstrapping an 'open' FPGA effort. The JTAG mess resonates with me. I probably have 10 different debugger dongles now, and the higher-end ones from Green Hills or Wind River are pricey.
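A back-of-the-envelope sketch of that host-to-target ratio question. The overhead factor here is a pure guess for illustration, not a measured figure; emulating one open-architecture LUT plus its routing muxes in host fabric could easily cost tens of host LUTs:

```python
# Rough capacity estimate for an FPGA-within-an-FPGA: how many "virtual"
# LUTs of an open architecture fit in a host part, given an assumed
# emulation overhead (logic + routing muxes) per virtual LUT?
def virtual_luts(host_luts, overhead_per_virtual_lut):
    return host_luts // overhead_per_virtual_lut

# E.g. a ~33k-LUT host part and a pessimistic 40x overhead guess:
print(virtual_luts(33_000, 40))  # 825 -- CPLD-scale, but enough to bootstrap tools
```

Even at that pessimistic ratio, the result is in the 200-1000 LU range mentioned earlier in the thread, which is plausibly enough to validate an open toolchain end to end.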
As to open FPGA tools, I think the main problems are:
- there's a ton of competitive advantage in the algorithms that generate the bitstream, so Altera and Xilinx have no good reason to give that up until a viable competitor emerges
- FPGAs usually go into small-to-mid-size designs, so it's hard to scale up to the point where just selling the chips makes enough money
I am sorry you suffer from the dongle-collection woes too. The only reasons I can see to buy expensive dongles are the ones that have a big blob of RAM on them for quickly reading debug output from the chip without slowing it down, and the ones that can adapt automatically to highly exotic voltages and pinouts. But usually the software for these only works on Windows, has secret drivers, and is NOT worth two thousand dollars. Xilinx and Altera would not have to give up their custom algorithms. It is like building GCC instead of using Intel's compiler.
We just want to know what the chip's instructions are; we will find a way to make it optimal ourselves. In the microcontroller world, expensive debug dongles are essentially hardware keys for the software included. Lauterbach's Trace32, for instance, doesn't even have software keys; the license for their very extensive debugging software is stored in the dongle itself. Another piece of software that isn't often mentioned in these discussions is the driver for the embedded flash memory inside parts. USB has quite poor latency (~1 ms per transaction), and flash peripherals require quite a few read-modify-write operations, which soon adds up. More expensive JTAG dongles can run these operations locally on the dongle, with USB used only for bulk data transfers, or even download and run code on the target itself for even lower latency, greatly increasing download speed. Since this software needs to be tested for each target, requiring a physical chip to be bought, it gets very expensive to develop.
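To see why that 1 ms USB latency dominates, here is a toy model of programming time. All the numbers (image size, word size, round trips per word) are invented for illustration; the shape of the result, latency-bound versus bandwidth-bound, is the point:

```python
# Toy model: if every flash word needs its own USB round trips (~1 ms each),
# programming time is dominated by latency, not by data bandwidth.
def program_time_s(image_bytes, bytes_per_transfer, roundtrips_per_transfer,
                   usb_latency_s):
    transfers = image_bytes // bytes_per_transfer
    return transfers * roundtrips_per_transfer * usb_latency_s

IMAGE = 128 * 1024  # 128 KiB firmware image (invented)

# Dumb dongle: 4-byte words, read-modify-write = 3 round trips per word.
naive = program_time_s(IMAGE, 4, 3, 0.001)
# Smart dongle: batches a 4 KiB page locally, one round trip per page.
smart = program_time_s(IMAGE, 4096, 1, 0.001)
print(round(naive, 3))  # 98.304 -- over a minute and a half
print(round(smart, 3))  # 0.032 -- effectively instant
```

This is the whole argument for running the flash algorithm on the dongle or on the target itself: it turns thousands of latency-bound round trips into a handful of bulk transfers.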
Even a 'cheap' dongle like the Segger J-Link has an incredibly long list of targets, and I suspect a substantial amount of the purchase price is in engineering and testing flash drivers for those. ARM seem to be trying to solve some of these problems in the ARM world with CMSIS-DAP, an open standard for the USB protocol of JTAG/SWD dongles. It uses USB HID, so no drivers should be needed on any platform, and they have even created an Apache-licensed implementation, which Freescale are now using in their FRDM boards. Very well written. I had no idea some companies went so far as to make the debugger dongle itself the software key.
That is intense but makes a lot of sense. In the case of the Xilinx Platform Cable I wrote firmware for, they have a CPLD that does the JTAG writing of data and a Cypress FX2LP handling the USB side. They use a quad-buffered USB endpoint to let the PC load as much data as possible into USB, and in my experience the buffer fills up fast and the JTAG work takes most of the time.
Of course this is only because multiple 'pages' were preloaded; otherwise the delay to request more pages would be insane. Are you talking about the flash driver for loading external SPI flash? If so, you likely know this, but you have to load the FPGA with a program that loads the flash with the program you want the FPGA to load on reboot.
Each of these programs has to be compiled and tested per chip. While looking through OpenOCD I found they had drivers for the actual flash, and I am not sure if this is required for the FPGA flash bootstrapper I just described.
I will look over these links. I recently started messing with ARM chips, and they are programmed very differently from FPGAs, so I have some stuff to learn there. >Are you talking about the flash driver for loading external SPI flash? I was slipping into talking about ARM microcontrollers there, which have internal flash. Reads from it are memory mapped, but writes generally require feeding peripheral registers with commands, hence the latency-sensitive read-modify-write. Some ARM microcontrollers do have more direct methods, though: some of Freescale's Kinetis range have a feature called EzPort, where you hold a pin low on boot and the chip pretends to be SPI flash instead, and tiny 8-bit micros like the AVR all have something similar, as full JTAG would be too big for them.
>If so you likely know this but you have to load the FPGA with a program to load the flash with the program you want the FPGA to load on reboot. Each of these programs has to be compiled and tested per chip. Yeah, this is what I meant by having code run on the 'target', the target being the microcontroller you are programming. You are right in thinking this means the JTAG dongle no longer needs to know how to program flash itself, but it still needs the code to load in and instructions on how to do it (setting up clocks, etc.), which are also very platform specific.
You could do the same for external flash, but it's often easier just to connect to it directly and bypass the microcontroller. Feel free to PM me if you have any more questions. Xilinx's tools are horribly buggy.
And they don't want to get bug reports from you anymore unless you're a top-tier account. So their tools are a shit-show, and they don't want to hear about how they can be made better unless you're already buying a lot of their parts every quarter. I don't have any experience with Altera's tools, so I can't comment, but I don't hear good things from their users either. The only way around this is to create a completely open source FPGA architecture. Most of the basic patents have expired, so this should be doable now (it wasn't doable 10 years ago because too many of the basic patents were still in effect).
An open FPGA architecture is the only way we're going to get open FPGA tools. You'd think that if such an architecture were created, several semiconductor companies could then produce parts. Oftentimes there are clever optimizations and workarounds that have to be implemented to deal with internal limitations the vendors don't publish, so as not to give their competitors marketing advantages. What we should be pushing for are cross-platform tools. Being open source isn't something that I would necessarily care about as an EE in this particular case.
It doesn't get me anything I don't get with vendor tools. The BIGGEST thing in FPGA/ASIC design is certainty. Errors in the tool cost me time and money. You'll find it difficult to convince anyone to use a tool that isn't supported by the vendor, because errors and bugs, in the tool or the tool data, won't get resolved quickly. Most of the money in ASIC/FPGA is spent on 'verification', either as pre-built/pre-tested cores or as tools that do verification, such as formal logic tools. Hmm, that is an angle I had not thought about: documenting hardware limitations for compilers gives competitors leverage to say theirs is better because it does not suffer from X.
I understand that open source is not particularly important to you, but I am a bit more skeptical about the verifiability of a product that is all secret sauce and promises than I am about something with open, checkable code and test suites. Open source software very rarely tries to hide its flaws to prevent a PR issue and then lazily fix them later because they are 'low priority'; instead they are fixed by whoever can, verified, and so on. There are always counterexamples, but I think the verifiability of the tool is in the same world as, and of similar importance to, the verifiability of the output.
You do make a point about the catastrophic cost of a screw-up when casting an ASIC, though. FPGA/ASIC are interchangeable for the purpose of verification. There are two types of verification: (A) does the post-synthesis gate-level netlist match my RTL, and (B) does my RTL do what I want it to do, i.e., match the spec? For B you have a whole host of third-party IP, verification libraries, assertions, etc. that you can use. For A there are formal verification tools; they mathematically match A to B. There is no need for that tool or anything in the chain to be open. Synthesis is complex, and optimized synthesis is very important.
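The contract of those formal equivalence tools can be shown with a toy sketch. Real tools work symbolically (BDDs/SAT) over whole netlists; this illustration just compares an RTL-level function against an assumed gate-level implementation exhaustively:

```python
from itertools import product

# Toy "formal equivalence" check: compare an RTL-level description of a
# majority gate against its gate-level implementation over all inputs.
# Real equivalence checkers do this symbolically, but the contract is
# the same: prove the two descriptions agree on every input.
def rtl_majority(a, b, c):
    return int(a + b + c >= 2)          # behavioral/RTL view

def gates_majority(a, b, c):
    return (a & b) | (a & c) | (b & c)  # as-synthesized gate view

def equivalent(f, g, n_inputs):
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=n_inputs))

print(equivalent(rtl_majority, gates_majority, 3))  # True
```

Nothing about this check requires the synthesis tool itself to be open; it only requires both descriptions and a trusted checker, which is exactly the argument being made above.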
Timing closure is where a lot of this stuff comes to the forefront, and that's the part that vendors won't release. That's their secret: what sucks in their chip, and what workarounds they have to use. You'll pry it from their cold, dead hands. As an engineer, I don't care about their secrets.
I care about making sure that I don't have to chase down a synthesis bug, and that their compiler gives me the most optimized, fastest design. Synplify used to be a third-party product that did FPGA synthesis (Synopsys bought them). They discontinued it, even though they had full specs/details from Xilinx/Altera.
The main reason is that the Xilinx/Altera tools are excellent. They know their chips better than anyone, and for marketing purposes it is in their interest to give you the tool that produces the fastest or smallest design; otherwise you would switch to their competitor. I love the idea of what the OP is trying to do, but it is a solution in search of a problem. There is a bigger problem that an open source tool could solve, and that is Verilog simulation. Currently we have Icarus Verilog, but someone should improve it and add SystemVerilog support. Simulation is much easier to solve, since there is a spec to design against, and it has many more users.
There aren't good, inexpensive simulation tools. Simulation is as important as compilers. Imagine a world where gcc didn't exist and you had to pay to get a good compiler. People will always stick with the big vendor for synthesis; I can't imagine a day when they wouldn't.
One nasty little caveat is that Xilinx's tools will not synthesize certain elements, such as state machines, the way you've written them, for performance reasons. This caught me out: I was trying to use an open source design someone else had created with Synplify, and it used a Gray-code state machine to cross between two clock domains. Xilinx's synthesis tool replaced it with a one-hot state machine, which was not safe for this purpose; it worked for a while and then randomly wedged itself. To be fair, the synthesis log did mention this, amongst the huge pile of other messages.

OP here. You are right that my particular project is not the biggest piece, but it is the part that pisses me off. I can deal with Xilinx's crappy compiler if I can program my chips with ease. Or maybe I should say UNTIL I can program my chips with ease. Then my focus will change, hehe. Icarus works, but it has been maintained by one 'eccentric' guy for quite a while and the code base is semi-unapproachable. I believe this is why the Yosys guys started from scratch.
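Back on the Gray-code caveat a couple of comments above: the reason Gray-coded counters are safe for clock-domain crossing, and a one-hot substitution is not, is a one-bit-per-transition property that is easy to check. A small Python sketch, purely illustrative:

```python
# Gray codes are safe across clock domains because consecutive values
# differ in exactly one bit, so a sample taken mid-transition is wrong
# by at most one step. One-hot encodings change two bits per transition,
# so a mid-transition sample can be a completely invalid state.
def to_gray(n):
    return n ^ (n >> 1)          # binary-reflected Gray code

def hamming(a, b):
    return bin(a ^ b).count("1")

# Every adjacent pair of 4-bit Gray codes differs in exactly one bit...
print(all(hamming(to_gray(i), to_gray(i + 1)) == 1 for i in range(15)))  # True
# ...while adjacent one-hot states differ in two bits (one clears, one sets):
print(hamming(0b0100, 0b1000))  # 2
```

This is why silently 're-encoding' a state machine is a correctness change, not just an optimization, when the encoding itself is the point of the design.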
You are also right that place and route, as well as verification of the layout in the chip, are super big problems. It is in fact what my friends toying with the compiler side are dreading, because we have no idea of the delays of individual traces in the chip. I disagree that people will stick with the vendors' tools. Open compilers dominate most of the Intel CPU market except on Windows, where Visual Studio is the only thing that deals with all the quirks reasonably well; but the Windows case is more a matter of lack of interest.
ARM compilers are more interesting to me because most people use Keil, but it feels crazy retro paying for a compiler. I believe the only reason Keil is used for most embedded ARM projects is that there simply are not enough people with compiler knowledge using ARM regularly yet to pour in enough work to make a better alternative. But it will come.
And one day we will have synthesis tools for FPGAs that are comparable to the big guys', or better. I fully support the spirit of what you're doing, btw. I hope you don't take my comments as somehow dumping on your work ;). What is 'crappy' about the Xilinx tools aside from the UI? I'm genuinely curious. I don't think the comparison between synthesis tools and compilers holds; they are different beasts.
I think the compiler equivalent is the sim tools, which should be open and free. Everyone I've talked to who uses Keil uses it because of SUPPORT. If they need something or there is a bug, they know it'll get fixed. That's the ONLY reason anyone has EVER cited to me. It is not true that the lack of alternatives is due to lack of knowledge; there just isn't a big enough market for it. ARM itself also makes all the patches for the GCC tools.
I use the gcc toolchain, as do many people. Haha, not at all. I do not take it as an attack or anything :). The UI problems are obvious.
The inability to use most of the intermediate file formats for anything, since they are secret formats, is annoying. If I remember correctly, it had dependencies on Java and Mono.
The command-line tools are archaic and very difficult to use, even before you realize they are not documented, out of fear of giving something away about how anything works. iMPACT, for programming chips, incorrectly loads the libusb.so file and has to be LD_PRELOADed on Linux, and even then there is some weird race condition that makes it work one time in four. In order to do anything you have to download and install 15 gigs of data and agree to aggressive licenses. A compiler toolchain is usually a compiler and a linker. I could understand if you said that place and route is more of a linker step, but people often call the full process 'compilation'. The fact that stitching the modules together and actualizing the equivalent of addresses and instructions (in CPU terms) is a physical fitting and box-packing problem in an FPGA does not make it a fundamentally disparate step to me. Wow, I did not know that about the ARM compilers.
I must have gotten the wrong impression from forum posts on the use of GCC for ARM chips. That makes me pretty happy, particularly that ARM is helping with the open tools. I will say, though, that this feels like it supports my point, because people are using the non-vendor tools.
If support is a concern, support is not something that only a big company can provide. Postgres has companies that provide support and add features for a fee. I will admit that Oracle's responsiveness to a customer need is much bigger and more organized, but it had better be, for what people pay for it.
With the Oracle example, I think I am leaning towards 'there will always be room for a proprietary solution to handle the edge cases of a market', rather than 'open solutions will not work as well as proprietary ones and can never be the de facto standard'. Xilinx iMPACT (the FPGA programmer) bugs: in one version, it can load the project it saved, but then crashes when you go to program, so you have to scan and reload all the data files every time you start it. A later version failed to program the chip at the last stage of the process.
I diagnosed that over phone tag, where the user was non-technical and on a machine not connected to the Internet. Exact same chip as above. Another, later version has different iMPACT bugs than that first version, but right now I haven't been able to get it to work on any Win 8.1 machine, so I don't remember which bugs it has. It's not just FPGA tools; it's pretty much the whole EE industry. A few days ago I had trouble saving results from a semiconductor analyzer.
The path had spaces in it. If you have the time to display a warning that it doesn't support spaces (on Windows!), you should be able to fix the issue. Just imagine how bloated their codebase must be that they can't fix issues like this easily. Recently I was trying to use ModelSim (an HDL simulation tool) on a Linux machine which, unfortunately, was not running a 10-year-old distribution. It was failing with obscure Tcl/Tk errors :). I should have known better than to waste my time.
I reinstalled it on a Windows machine, and then it had problems connecting to our license server, so I had to use a crack from a Chinese BBS (ugh). I really want work to be done in this area, but one of the giants will either sue you into oblivion or make you an offer you can't refuse. Either way nothing productive will get done :(. I am the OP. astrodust: that is exactly the problem :). It turns out ISE (Xilinx's tool) has several command-line tools that are run by the GUI in order, but the arguments are not documented. I was able to make a Makefile that did the ISE compilation so I could edit in emacs, but that was way too much crap to be practical.
The new generation of Xilinx's tools, Vivado, is written in Java and does not output any temporary files, so during compilation it can fill over 32 gigs of RAM and crash. Xilinx suggests using ISE for larger chips until maximum RAM sizes get higher. Kornholi: I do not mean to insult hardware engineers, since they are very intelligent; they just often do not respect the same things software engineers do. I went to an Atmel event where they were demonstrating how to use their new ultra-low-power chips. It turns out that most of the people who went were software people driving down from SF. They started off by reminding us that they have a new IDE based on the powerful and versatile Visual Studio.
Everyone in the audience groaned. Almost everyone in the audience asked for assistance in finding where the Makefile was. The following exchange happened over and over.
Host: Oh, but you see, with AVR Studio you do not have to _worry_ about the Makefile; it does it for you.
Guest: Yeah, but I do not want to use it.
Host: Why would you not want to use the tool we provide? It works.
It seems to me that most of the hardware industry will take whatever tools they can get, even if they have to have the company pay several thousand dollars.
The ones I have met forget that they can write tools. When tools need to be created, they just pick whoever is most comfortable in Java, or outsource it to anyone. This all sounds very critical of professional hardware engineers, but it is not exclusively their fault; it is also a culture thing, of the companies and of the industry.
Everything feels to me like how software engineering was (as I am told) in the '70s and '80s, where everyone is super paranoid about secrets and rushing to be first. There is a reasonable concern behind being first. If you build the first flash RAM chip, then 40 years in the future, when we have moved from flash to crystallized light or something, everyone who wants to compete will be using the same pinouts you picked for your first product, so that boards never have to be redesigned. As for ISE and Vivado, I hear they suffer from design by committee, where every feature has to be checked off as working before it will ship so there is a new bullet point. Hell, they have a C-to-FPGA compiler which you would suspect could do some crazy things and took thousands of engineering hours to make work.
But instead it just implements a CPU in the FPGA with slightly accelerated operations for your setup, completely missing the point of FPGAs. I used to work in HW. You are kind of insulting, but it's true: we take what we can get.
I am not sure what you are suggesting HW engineers do. Who has time to build a better tool? That's a massive task. When you are on a project, you work on delivering it. Projects fork out several thousand dollars for tool licenses because revenue from the finished product is in the millions. The Xilinx tools are crap, but you just work around them.
95% of HW engineering is fixing the actual design. It doesn't matter that much how good the tools are.
It's like asking a truck driver to build their own truck. They're paid for getting stuff from A to B on time. How well HLS does depends heavily on the source code (this is true in general, not just for Vivado HLS). If your code is a simple loop over an array and you add a vendor-specific #pragma directive (such as #pragma unroll), the tool will unroll your loop and extract the parallelism from there. This actually works quite well in practice for regular DSP code (like FIR and FFT) and floating point. Anything else is another story though.
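To make the pragma point concrete, here is a minimal sketch of the kind of loop HLS tools handle well: a fixed-bound loop over arrays with no data-dependent control flow. The pragma spelling is vendor-specific (Vivado HLS uses `#pragma HLS unroll`); a plain C compiler simply ignores the unknown pragma, so the same code also runs as ordinary software.

```c
#include <stddef.h>

#define TAPS 4

/* A FIR inner loop in the shape HLS tools like. The unroll pragma asks
 * the tool to instantiate all TAPS multiply-accumulates as parallel
 * hardware instead of a sequential loop. */
static int fir_sample(const int coeff[TAPS], const int window[TAPS])
{
    int acc = 0;
    for (size_t i = 0; i < TAPS; i++) {
#pragma HLS unroll
        acc += coeff[i] * window[i];
    }
    return acc;
}
```

Rewrite the same filter with pointer chasing or data-dependent branches and the tool can no longer extract the parallelism, which is exactly the "write it the way the tool expects" problem described here.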
The thing is that unless you're writing your code the way the tool expects, with the proper pragmas etc., there's no way it can be transformed into fast hardware. A way around that is for vendors to ship 'customizable IP', kind of like Altera's Megafunctions.
So much for portability and high-level. I'm not sure which tool OP is referring to, though; I remember Altera had a C2H tool that they discontinued in favor of their OpenCL SDK. First, just wanted to say that I find this topic really interesting. I'm trying to understand who the target user is for open-source FPGA tools. For many hardware companies, the risk of using an unproven tool is too severe. Unlike software, you can't just push out a patch if there is a bug.
I mean, in theory, I guess you can, since FPGAs are reconfigurable, but it is probably not very straightforward from a deployment point of view (I actually don't know, so feel free to correct me). So, since you cannot push out fixes easily, it's quite scary for engineering teams to work with unproven tools. In the consumer electronics space, product life-cycles are quite short (mobile phones get updated every year!). So you definitely don't want to risk cutting into your product's time-to-market due to bugs in the FPGA tools. Also, the bigger the company, the more likely it is to have high-priority direct support from the FPGA vendors.
Whereas, it's probably harder to get support for open-source tools. So I can't see consumer electronic companies choosing open-source tools. So again, I'm not against this work at all! Just trying to understand the target audience.
>(I actually don't know so feel free to correct me). Okay, I will :) Many FPGAs load their program from a SPI flash chip on the board, which often can't easily be reprogrammed in the field. However, it's increasingly common for another microcontroller or SoC to be on the same board. In this case, it's cheaper and more convenient to store the FPGA bitstream on that controller's flash and send it over SPI to the FPGA on powerup, which makes upgrades much easier, too. On the Xilinx Zynq, a chip combining an FPGA with two Cortex-A9 cores running Linux, you can simply cat your bitstream to a character device to reconfigure the FPGA.
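That "cat to a character device" flow is just a byte copy. As a sketch, assuming the stock Zynq configuration node at /dev/xdevcfg, something like the following would do it in C; the function is deliberately generic, so it works on any pair of paths.

```c
#include <stdio.h>

/* Stream one file into another, e.g. a bitstream into the Zynq's
 * configuration device node (/dev/xdevcfg on a stock Zynq kernel).
 * Equivalent to: cat bitstream.bin > /dev/xdevcfg
 * Returns 0 on success, -1 on any I/O error. */
static int load_bitstream(const char *bitstream, const char *dev)
{
    FILE *in = fopen(bitstream, "rb");
    FILE *out = fopen(dev, "wb");
    char buf[4096];
    size_t n;

    if (!in || !out) {
        if (in) fclose(in);
        if (out) fclose(out);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            fclose(in);
            fclose(out);
            return -1;
        }
    }
    fclose(in);
    return fclose(out) == 0 ? 0 : -1;
}
```

On real hardware you would call `load_bitstream("design.bit", "/dev/xdevcfg")`; here the device path is just another file, which is what makes field upgrades of the FPGA as easy as shipping a new file.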
OP here. I get there is a big difference between hardware and software, but I feel that certain parts of the gap are closing. They will not CLOSE, but as they get closer we can learn from each other.
With that said, to address your point: Linux is one of the most dominant systems for servers. Google, Facebook, Twitter, etc. use it for almost all of their backend systems.
They use it not just because it is free but because it is proven and scalable. Open source databases and web servers also enjoy considerable (sometimes dominant) share. There are companies that provide support contracts for Linux (SUSE, Red Hat), and that model has worked to fund other open software. So I think that establishes that a company will use open software. As for being scary, you are correct that it is a primary decision-making motivation.
It is why many professionals in tons of industries still run Windows XP. Often they NEED XP because the software they rely on is so dependent on that specific ecosystem that huge rewrites are necessary for it to work on anything else, and that work (even if done by the original distributor) will take years before the tools are stable again. This is not good engineering on the part of the tool designers. The crippling fear of NEVER TOUCHING ANYTHING is something that open software alternatives have been alleviating in corporate environments since the late 90s.
The lifecycle of putting a phone together is much faster than Intel designing a processor, or Xilinx designing a new FPGA. Building the boards in phones and laptops is a complex game of Lego, placing parts so the timing diagrams work in all expected use cases. And if the timing does not work in some cases, whatever, people get a new phone every year. Hell, on my Galaxy S3, Verizon said 'oh yeah, that is a known issue where sometimes the GPS just does not work because of loose wires; that was fixed in the S4, I think, which is only $$$.' You are also correct about direct support from the company. In fact, make Xilinx enough money and they will share their secrets with you. But I am more targeting small to medium development.
If open tools become popular and stable enough to get market share from hobbyists up to medium sized companies, it will only be so many years before bigger companies hire new engineers who grew up using open tools and have no patience for tools that have to be emulated in Windows XP and have 35 nested sub menus for enabling a feature. I do not feel attacked at all.
You brought up interesting points. I hope I did them justice and did not ramble, since there were a lot of points to hit. OP here: dominicgs had some good ideas, since software defined radio is very powerful.
One extreme example of something you could do with a reconfigurable radio signal processor in your machine is install things that the FCC would not allow to be produced and sold. For example, if someone reverse-engineered the military high accuracy GPS stuff and anonymously published a Verilog spec so he was not sent to prison forever, no one would be able to use that to produce chips in consumer devices. But if we found the Verilog and wanted military GPS on our phones, all we would do is install software. Another example some friends have been excited about is emulation of older hardware, including video games, but much more. I will describe old games since they are closer to people's hearts.
All the old cartridge based games had the actual cartridge contain half of the hardware needed to make a full computer. This immediately complicates emulation. But the real issue comes from quirks in the actual hardware. This is an issue because developers had no API except the exact hardware specifications that every console shared. No one worried about checking that their clock frequency was correct and dividing it down so they got a certain framerate. Everything was constant. Developers would find and exploit undocumented glitches in the hardware to make things faster, or sometimes to make them run at all.
Hell, some games used the difference in clock frequency between two internal chips as a source of PCM sound to make little bleeps so they could save code. For this last example, this means that unless you emulate each of these chips to the clock cycle, and sometimes their exact internal structures, some game will not work or will crash randomly. The CPU power to do this is insane: a 3GHz core was the minimum suggested processor for running a highly accurate NES emulator in real time.
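A minimal sketch of why cycle accuracy is so costly on a CPU: every emulated chip has to advance in lockstep, one clock at a time, so tricks that depend on the relative phase of two chips still line up. The two step functions below are placeholders for real chip models, not an actual console; the point is the per-tick function-call overhead, millions of times per emulated second.

```c
/* Cycle-lockstep emulation sketch. Neither chip is ever allowed to run
 * ahead of the other, so inter-chip timing hacks behave as on hardware. */
typedef struct {
    unsigned long cycles;  /* master-clock ticks executed so far */
} chip_t;

static void cpu_step(chip_t *c) { c->cycles++; }  /* placeholder chip model */
static void apu_step(chip_t *c) { c->cycles++; }  /* placeholder chip model */

static void run_lockstep(chip_t *cpu, chip_t *apu, unsigned long ticks)
{
    for (unsigned long i = 0; i < ticks; i++) {
        cpu_step(cpu);  /* one tick for each chip per loop iteration */
        apu_step(apu);
    }
}
```

An FPGA sidesteps this entirely: the two "chips" really do tick in parallel, for free.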
FPGAs let us implement the actual system, bugs and all, in hardware and not worry about CPU usage. Give the FPGA a blob of shared memory for writing video and PCM data to, and the CPU just passes blobs of memory between devices. Programs like Postgres could implement highly optimized versions of certain algorithms, like the hashing used for index lookups. The kernel could detect it has FPGA space and offload some of its work to the chip. I hope these have sounded interesting. In the future, if my tool for loading the chips works and is adopted, I hope to define an interface for using FPGAs directly in the computer, similar to how OpenCL defines an interface for GPUs. Hopefully I would not follow the shit show that occasionally is the Khronos Group.
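The Postgres-style offload idea could be sketched as a dispatch layer: callers ask for a hash function and get either the accelerator or a software fallback, without caring which. Everything here is hypothetical illustration; `fpga_hash_available` stands in for a real driver probe, and FNV-1a stands in for whatever hash the database actually uses.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint32_t (*hash_fn)(const uint8_t *buf, size_t len);

/* Software fallback: 32-bit FNV-1a, used when no accelerator is present. */
static uint32_t soft_hash(const uint8_t *buf, size_t len)
{
    uint32_t h = 2166136261u;        /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];
        h *= 16777619u;              /* FNV prime */
    }
    return h;
}

/* Hypothetical driver probe; a real one would check for loaded FPGA space. */
static int fpga_hash_available(void) { return 0; }

/* Pick an implementation once; callers just use the returned function. */
static hash_fn select_hash(void)
{
    if (fpga_hash_available())
        return soft_hash;  /* would return the FPGA-backed function here */
    return soft_hash;
}
```

The caller-facing contract (same function pointer type, same results) is what would let the kernel or a database swap in FPGA space transparently.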
Some people will say that there are already PCIe cards with FPGAs on them and that this is not a new idea. But the issue is that existing cards, like the ones Digilent makes, are there for building PCIe card designs.
The FPGA (usually one) on the board has to implement the PCIe fabric plus whatever it actually wants to do. Imagine if, when you wrote an OpenGL/CL program, you had to deal with all the DMA work and quirks of the underlying setup. Instead, the grid of FPGAs should have standard ways to access certain mapped regions of memory through a separate chip that handles the PCIe transport layer. If this were available, then adding FPGA support to Postgres would not require bypassing your kernel's HAL or worrying about the timing details of something as fundamental to modern computers (and thus software engineers) as PCIe. Obviously the FPGA is still hardware specific, but who knows what could be done with that. Even if we had to synthesize from raw HDL every time, at least loading the chips would WORK.
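From the software side, "standard ways to access mapped regions of memory" usually means mmap. A minimal sketch, assuming a kernel driver (Linux UIO exposes nodes like /dev/uio0 for exactly this) has exported the region as a file: the application maps it and reads or writes plain pointers, with none of the PCIe transport details visible.

```c
#include <stddef.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Map a device-exported memory region into the process. With Linux UIO
 * the path would be something like /dev/uio0; any ordinary file works
 * too, which lets the sketch be exercised without hardware. Returns the
 * mapping, or NULL on failure. */
static void *map_region(const char *path, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return NULL;

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping remains valid after the fd is closed */
    return p == MAP_FAILED ? NULL : p;
}
```

Once the transport chip and driver present the FPGA as a memory window like this, Postgres-style integrations become ordinary userspace code.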
We could start using these chips to solve problems instead of things like a one-off project by an EE major to build a grid of FPGAs that process bitcoin. The bitcoin work was impressive, but it bothers me SO MUCH that they used reconfigurable devices to build a fixed-function card.
I can't speak for OP, but my interest is in signal processing. DSP and FPGAs are a great match and with the boom in software defined radio, having reconfigurable DSP capabilities is extremely useful. For example, if 802.11b devices had been built using FPGAs, an upgrade to 802.11g could have been an OTA update. Or a new Bluetooth variant emerges (e.g. BLE) and support could be added to computers and phones overnight. For these applications it's useful to think of an FPGA as a chip that you can patch, upgrade or reconfigure for new applications. However, looking at it from another angle, we can think of it as software that isn't limited by the CPU architecture.
This second category opens up the possibility of crypto algorithms that don't suffer from the timing attacks that they do on the CPU. Or a video codec that can be designed without having to worry about which extensions the CPU supports.
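For reference, the timing-attack problem on CPUs comes from early-exit comparisons: a naive memcmp returns at the first mismatch, so response time leaks how many leading bytes were correct. The standard software mitigation is a constant-time compare like the sketch below; in an FPGA the equivalent circuit compares all bytes in the same number of clock cycles by construction.

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: every byte is examined regardless of where
 * the first mismatch occurs, so the running time reveals nothing about
 * the secret. Returns 0 when equal, nonzero otherwise. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff;
}
```

Note that even this C version depends on the compiler not "optimizing" the loop back into an early exit, which is part of why doing it in hardware is attractive.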
I'm sure there are much better examples, and some of these are bad ideas that would work better on a CPU, but hopefully that gives you some ideas. Sure, ASICs are always going to beat FPGAs and CPUs in power consumption for the same task, but there are also applications where the ability to update the system far outweighs the power requirements.
Interestingly, SDR is being used for the UK's small scale digital radio station trials this year. Off the shelf SDR hardware appears to be performing well enough at a lower cost than bespoke DAB hardware. Again, this fits in with your base station category. You're right about timing attacks, I have no idea what I was thinking there. Would they be better for power analysis attacks? I guess if we're concerned about that then we'll end up back at ASICs again. Open FPGAs would be disruptive to the status quo, that's why they are closed.
Right now many technologies like video cards are wandering in the desert, adding more and more opaque layers rather than providing general purpose computing. That has set simulation back at least a decade or two and hindered progress in fields like AI. I just want to address one of the main complaints about FPGAs: that they require more chip area to route logic. This is not as big a deal as it sounds, because today 3/4 of a chip's area or more is often dedicated to mundane things like cache. Also, since there has been little progress in breaking the 3 GHz barrier since the early 2000s, everything is moving towards higher transistor counts. So the added cost of layout goes down over time and after about 3 years is on par with the previous generation.
So we have a chicken and egg problem where true general purpose parallel computing can't get off the ground because it's perceived as too expensive, but it's too expensive because it hasn't gotten off the ground yet. Breaking that chicken and egg cycle by opening FPGAs could trigger an overnight adoption of them, even greater than when Bitcoin triggered renewed interest in ASICs.

About @diamondman's Adapt Framework (github.com/diamondman/adapt): Adapt is an open, modular framework which offers a streamlined way for JTAG controllers to speak to target devices (currently CPLDs and soon FPGAs). Adapt is built to be extensible and currently includes open/reversed drivers for the following controllers and target devices: Digilent & Xilinx PC1 (JTAG controllers) & XC2C-256 (CPLD)*. *Note: Map files are required for CPLDs to avoid legal recourse.

FEATURE ROADMAP:
1. Support for controllers: Currently Adapt only supports JTAG, but @diamondman intends to support dbw & spi. Patches for other serial protocols are welcome.
2. Support for target devices: The next milestone is to support Spartan3 & Spartan6 FPGAs.
3. Extending Adapt / contributing drivers: Adapt can be extended to additional controllers and target devices. @diamondman will write a tutorial on how this is done and what the limitations are, if there is enough interest. Please express interest in driver support by replying to this comment thread.

@diamondman responded to this in another thread: 'I actually want to hook OpenOCD into my daemon to bring some sanity to how it manages devices. Several of my IRC friends prefer closed source tools to OpenOCD, calling it Open Obsessive Compulsive Disorder, frustrated that its license is so strict (mine will be LGPL), and annoyed that everything has to be exhaustively specified in TCL files for it to do ANYTHING.' The TL;DR is that OpenOCD's driver model itself is fixed/limited/standardized to a degree which prevented him from, in many cases, optimizing underlying controllers (or getting them to work at all -- e.g. OpenJTag, I believe). Also, see the licensing disagreements and unreasonable configuration requirements outlined in @diamondman's answer above.
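For context on what any JTAG driver like Adapt or OpenOCD has to implement at the bottom: the IEEE 1149.1 TAP controller, a 16-state machine where the TMS pin picks one of two successors on every TCK edge. A sketch of its transition table:

```c
/* The 16-state JTAG TAP controller from IEEE 1149.1. Five TCKs with
 * TMS held high reach Test-Logic-Reset from any state. */
enum tap_state {
    TLR, RTI,                                                     /* reset, idle */
    SEL_DR, CAP_DR, SHIFT_DR, EX1_DR, PAUSE_DR, EX2_DR, UPD_DR,   /* DR column  */
    SEL_IR, CAP_IR, SHIFT_IR, EX1_IR, PAUSE_IR, EX2_IR, UPD_IR    /* IR column  */
};

/* tap_next[state][tms] gives the successor state. */
static const enum tap_state tap_next[16][2] = {
    [TLR]      = { RTI,      TLR    },
    [RTI]      = { RTI,      SEL_DR },
    [SEL_DR]   = { CAP_DR,   SEL_IR },
    [CAP_DR]   = { SHIFT_DR, EX1_DR },
    [SHIFT_DR] = { SHIFT_DR, EX1_DR },
    [EX1_DR]   = { PAUSE_DR, UPD_DR },
    [PAUSE_DR] = { PAUSE_DR, EX2_DR },
    [EX2_DR]   = { SHIFT_DR, UPD_DR },
    [UPD_DR]   = { RTI,      SEL_DR },
    [SEL_IR]   = { CAP_IR,   TLR    },
    [CAP_IR]   = { SHIFT_IR, EX1_IR },
    [SHIFT_IR] = { SHIFT_IR, EX1_IR },
    [EX1_IR]   = { PAUSE_IR, UPD_IR },
    [PAUSE_IR] = { PAUSE_IR, EX2_IR },
    [EX2_IR]   = { SHIFT_IR, UPD_IR },
    [UPD_IR]   = { RTI,      SEL_DR },
};

/* Advance the TAP by one TCK edge with the given TMS value. */
static enum tap_state tap_step(enum tap_state s, int tms)
{
    return tap_next[s][tms ? 1 : 0];
}
```

The state machine itself is standardized; the pain OpenOCD addresses (and where its driver model gets in the way, per the comment above) is in how differently each USB/parallel controller chip clocks these TMS/TDI sequences out.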
Thinking about this topic some more, it seems hard to gain the critical mass of contributors for such an open-source tool. Place-and-route, for example, requires a very specific set of knowledge in both optimization (comp-sci) and hardware (electrical).
Most of the people who have these skill sets are probably already employed by the major FPGA vendors and under NDA not to contribute to such an open-source tool. I've seen this first-hand, having been in the academic space of FPGA research. New master's and PhD grads typically go straight to Xilinx or Altera. And without really, really good place-and-route, you won't have a competitive tool.
I'm not sure how you would solve this problem. OP here. This is a severe problem that saddens me greatly. For now I am just writing tools to make JTAG/SPI/etc. loading of chips ubiquitous for the developer writing software/netlists for them. I do not know enough about the place-and-route math and routines, nor have I seen enough details of real life chips, to know the types of challenges faced. But I will cross that bridge when I get there (maybe it will be someone else :)). I personally have a huge issue letting anything fundamental I figure out be marked down as the property of a company to have and hold for 20+ years. But I understand the financial rewards and interesting problems are more than enough for many people, so I cannot hold it against a student with a PhD's worth of debt who needs cash.
Hopefully if a good enough extensible base exists, people will add pieces (whatever they can get away with) over time. That is all I can really hope for. I get the idea you are aiming for, but this will not work. GPUs are vector processors. They are good at certain types of repetitive, independent (parallel) math. They can be good at simulating physics since physics is described with matrix math.
Each level of emulation, simulation, or implementation robs resources. Yes those architectures are more open but they do not do the same thing. More work with those setups so, say, Postgres can offload certain math to the GPU, is great and should be done. But this is somewhat different than FPGAs (though in the same spirit). CPUs, GPUs, and FPGAs all solve different types of problems very well.
But they do not run each other's problems well, and certainly do not implement each other well. Well, with one exception: you can build a reasonable GPU with shaders in an FPGA, but it will not outperform the FPGA it is implemented on at any given task. If you hardwired the FPGA to run the equivalent operations of an OpenCL shader running on a GPU implemented in the same FPGA, the hardwired version would win hands down. Vivado is good and bad. Vivado's schematic and device views are much better than the earlier FPGA Editor et al.
The interface for adding Chipscope debug signals is quite nice. However, there are lingering bugs. The hardware manager crashes a lot if you don't handle your debug nets carefully. My colleague had an issue with constraint priority. I _think_ I've encountered an issue with VHDL synthesis being incorrectly case-sensitive, but I never bothered to make certain. Overall it's getting there.
The Zynq stuff is neat (once you understand what's going on) but I have some misgivings about the push toward IP cores. Yeah, I'm not a huge fan of IP cores, but they have their place. The biggest thing that I'm a fan of is the push towards standard TCL methods for everything Vivado does or touches.
That's the difference between UCF and XDC. I'm also under the impression, though I don't have the experience to say for sure, that there are supposed to be intermediate files for every design step and that the command line interface is better documented. I am very impressed by the Zynq stuff. I'm particularly looking forward to seeing people use it for accelerators and such.