19 Comments
Duncan Macdonald - Thursday, July 22, 2021 - link
One more group of engineers will find themselves out of jobs. Chip layout is a very specialized area - once computer-based systems can do this job well, the existing engineers in this area are likely to find themselves without much in the way of marketable skills.

Someguyperson - Thursday, July 22, 2021 - link
The most experienced engineers who do chip layout will be perfectly fine and they'll be able to do their jobs a lot better. This will push more junior engineers towards other roles, where being able to learn is more important than expertise. The sky isn't falling.

easp - Thursday, July 22, 2021 - link
Job losses will be offset somewhat by a proliferation of designs due to the changing economics brought by these tools. But yeah, this will take a human toll.
mode_13h - Thursday, July 22, 2021 - link
You're presuming the number of chips being designed stays constant. However, what if this lowers the cost and increases the benefits of custom silicon to the point that yet *more* chips are produced? Then, the total employment in the sector might stay roughly constant.

Either way, the best move for existing layout engineers is to get trained in using this new generation of EDA tools. Electronic Design Automation has been with us for many decades; this is just the next step.
bobj3832 - Friday, July 23, 2021 - link
I have been doing semiconductor digital physical design for 25 years. In that time I have heard lots of EDA startups and the big companies like Cadence and Synopsys talk about "Single pass timing closure!" and lots of other revolutionary ideas.

I will believe it when I see it.
In 1997 I remember doing the entire physical design for a chip in 3 months with just 3 people. I just finished a chip with 84 blocks on it. It took us 10 months.
My company has a lot of customers that want customized versions of our chips. We turn most of them away because we don't have the resources to do them. It is hard to find good engineers. If this really lets us do a chip in half the time then we will just have even more work because we will start doing all those chips that we didn't have resources for before.
vol.2 - Wednesday, August 4, 2021 - link
Even if that's true, it's no reason not to do it, and it won't change anything to complain about it. These people will just have to shift, and EEs are very hireable in many CS-related fields.

evancox10 - Thursday, July 22, 2021 - link
FYI when they say they reduced "Total failing timing" by 83%, they most likely don't mean that overall wire delay is reduced by 83% - that would be impossible to achieve. Instead, this is likely referring to a metric known as "Total Negative Slack", which is a measure of how close a chip is to meeting overall timing. Some definitions to break this down:

"Path" - A timing path is any route from one register to another where data needs to get there within a certain amount of time, generally the length of one clock cycle minus some margin. Any given path includes a mix of combinational gates, wires, and/or buffers, not just "wire delay" between cells.
"Slack" - This is the amount by which a given path meets timing (positive slack) or fails timing (negative slack). So for a 2 GHz clock, the period is 500 picoseconds, maybe you have 50 ps of margin, so you want data to get there within 450 ps. If going from register A -> gate X -> gate Y -> gate Z -> register B takes 475 ps, the negative slack for this path is 25 ps.
Then, Total Negative Slack (TNS) means you just take all the failing timing paths and add up the amount by which they're failing. This gives you a measure of how close you are to "closing timing" for the whole chip.
Why this is important is that when a chip comes out of the first pass of automated place and route (APR), it will almost never close timing 100% and you have to go through timing optimization iterations to get it there. These iterations can take a long time, and the closer you are to the goal when you start, the more likely you are to get there in the end.
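The definitions above are simple enough to sketch in code. This is a toy illustration using the 2 GHz / 50 ps numbers from the example, not any real EDA tool's API; real STA tools compute arrival times per corner and report TNS alongside worst negative slack.

```python
# Toy sketch of per-path slack and Total Negative Slack (TNS),
# using the 2 GHz clock / 50 ps margin example from the comment above.

CLOCK_PERIOD_PS = 500.0   # 2 GHz clock
MARGIN_PS = 50.0          # illustrative margin, not representative
REQUIRED_PS = CLOCK_PERIOD_PS - MARGIN_PS  # data must arrive within 450 ps

def slack(arrival_ps: float) -> float:
    """Positive slack = path meets timing; negative = path fails."""
    return REQUIRED_PS - arrival_ps

def total_negative_slack(arrivals_ps: list[float]) -> float:
    """Sum the slack of failing paths only (reported as a value <= 0)."""
    return sum(s for s in (slack(a) for a in arrivals_ps) if s < 0)

# The A -> X -> Y -> Z -> B path from the example takes 475 ps:
print(slack(475.0))                                 # -25.0 (fails by 25 ps)
print(total_negative_slack([475.0, 430.0, 490.0]))  # -25 + -40 = -65.0
```

Reducing TNS by 83% would mean that sum of failing amounts shrank to 17% of its starting value, even if some individual paths still fail.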
mode_13h - Thursday, July 22, 2021 - link
Thanks for the explanation. So, why such margins? Is it driven primarily by manufacturing variations or environmental variations (e.g. voltage, temperature, interference)?

To the extent these factors are understood, I wonder if the tools can't eventually tighten the margins, as well.
evancox10 - Friday, July 23, 2021 - link
Lots of reasons for margin, anything from uncertainty in the way delays are estimated/modeled, random variation in clock frequency ("jitter"), or plain old CYA. The modeling of the path timing does attempt to account for the sorts of variation you mention, but no model is perfect. The problem is differentiating between where it is actually needed and where it isn't. Lots of effort already goes into this, and the low-hanging fruit has been picked.

P.S. I pulled the example margin amount out of my rear, don't take it as necessarily representative.
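To make the "lots of reasons" concrete, here's a toy breakdown of how independent margin components might stack into a required arrival time. The individual numbers are invented (as noted, real budgets vary widely); they're chosen only so they sum to the 50 ps used in the example above.

```python
# Invented margin components stacking into a timing budget (illustrative only).
period_ps = 500.0            # 2 GHz clock
jitter_ps = 10.0             # random cycle-to-cycle clock variation
model_uncertainty_ps = 25.0  # error budget for delay estimation/modeling
aging_derate_ps = 15.0       # allowance for gates slowing down over time

margin_ps = jitter_ps + model_uncertainty_ps + aging_derate_ps
required_ps = period_ps - margin_ps
print(required_ps)  # 450.0 -> data must arrive within 450 ps
```

The hard part the comment describes is exactly this: deciding how big each component really needs to be on each path, rather than applying one pessimistic number everywhere.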
FullmetalTitan - Saturday, July 24, 2021 - link
That is pretty much it. Design windows have to allow for variation in the manufacturing process, but they also need to account for circuit aging, since that can have a very real impact on threshold voltage, leakage, etc. Margins for timing allow aging gates that may run very slightly slower to not force corrective changes to whole timing domains (a real-world example of that would be Apple SoCs clocking down to extend the useable life of older models).

mode_13h - Sunday, July 25, 2021 - link
> real world example of that would be Apple SoC clock down to extend useable life of older models

I thought that was to extend battery life.
Don't AMD CPUs or GPUs have some way of measuring circuit aging, and dialing back their boost clocks to compensate?
Oxford Guy - Monday, July 26, 2021 - link
What Apple claims and what Apple does... not always the same.

My venerable anecdote about using extreme bait-and-switch fraud to sell the original Mac to the tech press is not an isolated example.
If you want a more recent one... the force-feeding of the APFS file system onto Macs using hard disks (the file system is incompatible, causing extreme slowness) was designed, very obviously, to force people to buy replacement machines. It was a ploy that used rubbish about how terrible HFS+ is, due to its age. HFS+ is a million times better for Macs with hard disks.
Spunjji - Monday, July 26, 2021 - link
Thanks for that explanation - I'd read it as a reduction in paths actually failing entirely to meet the required timing for the clock, rather than failing to meet the required margins. That makes much more sense!

mode_13h - Thursday, July 22, 2021 - link
lol. For about the first third of the article, I thought the news was they partnered with Cerebras to use their Wafer Scale Engine to run these tools in the cloud.

However, that does raise the question of how much faster these tools would run on such hardware. Then, of course, you'd use the tools to *build* the next-gen WSE, which can also be used to research new semiconductor materials & fabrication techniques. And pretty soon, we reach the singularity. Or so they say.
chipxman7 - Friday, July 30, 2021 - link
I would be a bit skeptical about some of these marketing claims. They tend to cherry-pick the best result. They are also likely comparing timing and other metrics to the initial reference design.

Machine learning does offer some interesting technological gains. But you have to train a machine learning algorithm, and that takes time; it is not free. To be most accurate, you are training on your existing design or a similar design. Once it is established, though, it may fix issues found later on, and that often is not easy to do. Modern chip and software design is very complex.
mode_13h - Saturday, July 31, 2021 - link
Your skepticism is understandable. However, if these claims prove hollow, their customers will inevitably find out. And that's some serious reputational damage, since these business relationships typically last many years and involve multi-million-dollar contracts. So, it's not really in their interest to be too short-term focused, here.

Also, building chips involves such large costs and long timescales that customers are unlikely to approach any big change in how they operate without caution.
> You have to train a machine learning algorithm. And that takes time.
They seem to account for that.
> To be most accurate, you are training on your existing design or a similar design.
I'm not sure about that. I think it learns (or at least refines its model) from simulation feedback on its layouts, making it a fundamentally iterative process. This is also what enables you to tweak its priorities (e.g. frequency, power, area).
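If it really does refine itself from simulation feedback, the loop might look, very roughly, like this sketch. Everything here (the names, the toy simulate() stand-in, the simple hill-climb search) is my own illustration of the general idea, not the vendor's actual flow; the point is just that user-set weights on frequency/power/area steer which candidate wins.

```python
import random

def simulate(candidate):
    """Stand-in for real timing/power/area feedback from simulation.
    The candidate is a point in a toy 2-D "layout parameter" space."""
    x, y = candidate
    return {"tns": abs(x - 0.3), "power": abs(y - 0.7), "area": x + y}

def cost(metrics, weights):
    """Weighted sum of metrics; the weights encode the user's priorities."""
    return sum(weights[k] * metrics[k] for k in weights)

def optimize(weights, iters=500, seed=0):
    """Iteratively propose a candidate, score it, and keep improvements."""
    rng = random.Random(seed)
    best = (rng.random(), rng.random())
    best_cost = cost(simulate(best), weights)
    for _ in range(iters):
        cand = tuple(min(1.0, max(0.0, b + rng.gauss(0, 0.1))) for b in best)
        c = cost(simulate(cand), weights)
        if c < best_cost:  # simple hill climb: accept only improvements
            best, best_cost = cand, c
    return best, best_cost

# Weighting timing ("tns") heavily steers the search toward x ~= 0.3;
# changing the weights shifts the trade-off, as described above.
best, c = optimize({"tns": 10.0, "power": 1.0, "area": 0.1})
print(best)
```

Real systems presumably use far smarter search (and learned models rather than blind proposals), but the feedback-driven, priority-weighted structure is the same.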
MetalPenguin - Sunday, August 1, 2021 - link
Yeah, I'm pretty sure they are cherry-picking results, but the interesting thing here is whether a model trained on a decently large set of designs will be able to produce good results on any given new design. My personal opinion is that this might end up working out pretty well, because there are a bunch of design rules, or best practices, which apply to a lot of designs. So I would not be surprised if their system does a decent job of learning those.

I work in the semiconductor industry, and I'm personally excited about this because I see it as a way of amplifying engineer productivity, not necessarily taking work away from people. I work at a smaller company, and we are resource-constrained, so more intelligent and efficient EDA tools would just allow us to take on work which we currently have to refuse because we don't have enough people to do everything.