papermaster. what a perfect fit for a company that is fantastic at meaningless paper launches. thanks for not sharing if it will be am3+ compatible, amd. another snub to loyal customers.
nice try, trollie. am3+ launched in q4 2011. customers were promised am3+ variants of steamroller and excavator that aren't going to be delivered. go suck a butt.
He has a point. AM3 was released in 2009, and AM3+ has to maintain backwards compatibility with that. If they're building a brand new core for 2016+ I would hope for DDR4 support at the very least, which would necessitate a new socket.
Plus I expect this will have a built-in GPU anyway so AM3+ support would be out the door regardless, maybe FM2+ but then you still have the DDR4 issue. AMD is going to need a new socket.
backwards compatibility with am3 isn't a factor. most am3+ procs don't work on older am3 boards. some do with bios updates, but many do not. am3+ is a 2011 product, and steamroller/excavator procs were promised for the socket. amd could keep promises with new piledriver releases in 2014/15, or sempron variant of this new 2016 product. no dispute that a new socket design and ddr4 support are needed.
Yell at me all you want for backing Intel's no-upgrade-path strategy, but were IVB and Haswell still stuck on LGA775 we probably wouldn't be where we are today in terms of performance and efficiency.
It's time for AMD to move on. AM3 and AM3+ are getting old, and only bring up painful memories of FX.
You can't expect AM3 to be forever current in light of the move to HSA, APUs and DDR4, all of which are happening right now. It's time for a new socket.
It's going to happen sooner or later... they might even have a variant of Carrizo that is compatible with DDR4. The extra bandwidth is beneficial to their APUs, so saying it's not on their agenda is kind of silly. Even if it doesn't happen with Carrizo, it will happen with its successor.
It would also help tremendously on their smaller low-power APUs (single channel), since it is both lower voltage and higher bandwidth.
it is a new socket. pinouts and amps aren't the same. that quite literally makes it different. 2011 isn't the same number as 2009. this isn't that hard.
All current sockets are dead as soon as AMD switches to DDR4, as there is no worthwhile compatibility between the two technologies. I guess delusional people could hope and believe that AMD will hamstring themselves by wasting valuable die space on a dead-end DDR3 memory controller to appease the microscopic number of people who only upgrade their cpu or motherboard, not both at the same time. 2 years from now we will likely have PCIe 4, DDR4, USB 3.1, and SATA Express, making anyone who currently owns an AM3+ board seriously obsolete.
Fantastic at meaningless paper launches? Just for your info, the article is about AMD, not Nvidia. ;) And AM3+ is dead. People have known that for quite some time. Welcome to 2014! It's really naive to expect a desktop socket to live more than 5 years.
for more info on paper launches, see kaveri. thanks for telling me what year it is. very useful info. more than 5 years, you say? naive, eh? by your maths, it's already been 5 years -- seeing that you can't discern the difference between am3 and am3+, and 2009 vs. 2011. with piledriver production slated to continue through 2015, that would be more than 5 years, by your counting. got any more pearls of wisdom?
In 2016 it will be 5 years for am3+, I don't know if he meant that, but this is only a roadmap like the ones intel and Nvidia often release to show us where they are going. We rarely get any precise information on these roadmaps.
The arguments ''go suck a butt'' and ''tell your mom to stop calling me'' should get you banned, because they are proof that your argumentation is weak and that you feel attacked by opinions on computer hardware, like a ''groupie''...
Got any more pearls of wisdom like the two arguments I named above?
I'm nowhere near a crusade about banning internet bullying. I just think this has no reason to exist in forums like this one. This is a roadmap and you're making allusions about details AMD should give that no other company does with roadmaps like these.
And to be frank no one gives a whit regarding your apparent butt hurt. It's about time AMD did a ground up design as opposed to making those still chugging along with a Phenom II board happy.
"Don't approve"? Buy Intel; they change sockets like you "should" change your underwear.
That's OK. x86 performance increases have slowed to the point where it's not as hard to catch up. AMD will reach performance parity with Intel sometime within the next 2 - 3 years because it will cost Intel more than the market can bear to stay ahead of AMD. It's already costing more than the market can bear to significantly increase x86 speeds, which is why the x86 performance increase curve is flattening out (witness the stagnation in single core performance increases in the past 2 - 3 years).
Lack of competition from a CPU performance perspective. There have been clear and measurable advances in power efficiency. AMD is behind in process technology, which has always been an issue. AMD has been behind in other technologies such as high-k metal gates and finfet blah blah 3d transistors, where Intel has had a lead.
Telling me AMD will be caught up in 2-3 years? That has never happened in the past and likely never will. Pointless optimism.
^ and amazingly they still manage to sell products that I'd like to buy. The high-end APUs are really impressive and a lot of fun to tune. The new affordable Kabini / Beema architecture is an excellent platform for affordable NUCs, basic computing and some light-duty HTPC work. Intel is an efficiency king but doesn't service all parts of the market well.
as an owner of an i7-4770K and an A10-7850K, I can say both are equally fun to tinker with.
Yes, the A10 is impressive if viewed sans intel, but the only reason I bought the kaveri is because microcenter is selling it for $130. at the 190 or so MSRP, I would've gone elsewhere.
I also have the A6-1450 laptop, which is amazingly good for the price, but again, price being the keyword here. I'd much rather have competitive AMD designs at a price/profit margin comparable with intel. It is hard to build a sustainable business based on low prices...
So what are your usage patterns that require a super-fast CPU nowadays? My game of Bioshock Infinite didn't hiccup when I used an A10-5800K paired with a real graphics card, and it's plenty fast for video conversion.
rofl, are you gonna tell us you only ever fit $1000 intel chips in your boxes? AMD have been price/performance competitive in many of their generations since 2006, all except the very high end.
I'm telling you that AMD has never shown signs of catching up with Intel in x86 performance since the release of Conroe/Merom/Woodcrest almost eight years ago.
Hence my scepticism at the OP's prediction that they will be caught up in "2-3 years."
"AMD will reach performance parity with Intel sometime within the next 2 - 3 years"
That's what AMD fans have been saying since Conroe. It ain't happened since then, it ain't gonna happen in the future. Intel is ahead, they know it, and after the P4 fiasco they won't ever give up their lead in the x86 race.
That's been said before, back in the Socket 7 era. Then AMD caught up with the Athlon and pulled ahead with the Athlon XP and stayed competitive up until the Core 2 was released. You can't write off AMD yet. Intel has been dragging their heels on CPU performance since Sandy Bridge so there is a potential opening there.
The problem is that Moore's law is dead. If AMD gets in shape now they can reach right up there, but the architects and, more importantly, GlobalFoundries are where the real problem lies.
As a response to your comment, and to many of the comments above yours: the difference is that since Sandy Bridge, the speed at which Intel is moving forward on x86 performance has slowed, the first time in the history of x86 that this has happened. So the game is different now. Intel is moving ahead more slowly, and it will cost AMD less to catch up now.
As Flunk below mentions, AMD actually already did catch up and surpass Intel once, when Intel stumbled. Now, in 2014 (and since 2011-ish, really), Intel hasn't stumbled so much as they've simply hit a performance improvement wall. Now that Intel isn't moving forward so fast, AMD can catch up.
You think Intel has slowed progress on performance because they can't progress faster. Truth is they don't have to so they stall on purpose to milk consumers. I thought it was obvious myself.
I think you're wrong. The desktop market is stagnating, on the verge of shrinking if it isn't already. x86 CPUs have been more than fast enough for casual uses for at least 6 years now. They've been more than fast enough for moderately stressful work for more than 3 years now. There are very few workloads, used by very few people, that can't be satisfactorily handled by a $200 CPU from three years ago. Finally, the lowest-hanging fruit have already been picked over and over again in the quest for improving x86 performance. The node shrinks have just about hit an impossible, or at least very difficult to breach, wall.
All of this means that it is now prohibitively expensive to increase x86 performance substantially. Doing so demands more money than the market has to offer Intel for doing so. Sure, increases will happen, but only at a pace that can be achieved with lesser investment as demanded by lesser market interest in x86.
Lower power, on the other hand - that is an area where the advances are easier to make, at least for now, and consumer demands are high, given that more x86 CPUs are going into notebooks now than desktops.
Intel *can't* progress any faster, not while investing in R & D that is actually expected to have payoff. But AMD? AMD still has lower hanging fruit to pick. They can get down to the same process node as Intel, piggybacking off of demand from other segments of the industry that provide the dollars necessary to update fabs to tech competitive with Intel. They can exploit new ideas in CPU architecture (new to them, probably old to Intel) to bring their IPC up to par with Intel.
It will happen, not because AMD will magically become better at competing with Intel, but because Intel will hit the wall first (and is already hitting it from a performance increase perspective) and AMD will have the benefit of the trail already being blazed for them, and having to spend less money to get to that wall. It costs more to be the frontrunner than to follow the frontrunner. And so once the frontrunner can no longer run faster, the one trailing it will catch up.
Eh, it is entirely possible. Since Sandy, Intel has been reaching for the highest-hanging fruit to try to get improvement, and they have got very little.
No reason why AMD could not get its own version of Conroe... which was based on parts of the Pentium M and P4 (and some other stuff, ofc).
Unless Intel is willing to make a new architecture, ground up, there is not much performance gain to be seen from its big cores. Now, power efficiency improvements, that is a whole different story :)
AMD outperforms Intel's low-power Atom cores with their low-power Cat cores. High-end cores aren't everything. 99.9% of the market doesn't need processors like the i7-4770K. And I don't need to mention iGPUs. So, in 2 of 3 fields AMD isn't just on par, they are ahead of Intel in terms of performance. Some people really should take a look beyond Intel's propaganda machine.
Working on similar projects like these might not mean double the required number of people - there might be synergy between the projects, at least with regard to some building blocks. Besides, they are hedging their position.
At some point ARM will hit desktop in larger numbers, allowing for even cheaper computers. A complete quad core 2+ GHz ARM system should be doable at lower cost than the price of a quad core Intel CPU, using less power (and, admittedly, offering lower performance ... but how much performance is really needed for majority of workloads?).
No, he's right. The RISC vs. CISC wars have ended a long time ago, and vast portions of modern CPU architectures are actually incredibly similar. For example, AMD can utilize the same components like the IMC, loop buffers, and on-die GPU across what are "different" architectures.
And we've seen this before with AMD and Intel both. AMD used the branch predictor from Jaguar/Bobcat in their Piledriver cores because it was more efficient, and we all remember how Intel transitioned and borrowed from their Pentium-M.
The worry of splitting resources is still there though, given AMD's significantly smaller R&D size. But that 'synergy' buzzword isn't just a buzzword. I'm far more worried about execution in a timely manner, as AMD has dropped the ball repeatedly there in the past few years.
in the past few years (since the BD release) the only large issue AMD had was with the Kaveri launch.
AMD is getting better and better at execution.
Not enough to stop worrying, but enough that it's "AMD could screw up this execution" instead of "well, we will see this coming in 2017 if we are lucky"
The A8-7600, imo the pick of the litter, has been pushed back to the tail end of the year. But remember that Kaveri was meant to launch last year but was replaced by Richland, which was just Trinity with a clock speed bump.
Then there's Kabini/Temash, which started out in life as Wichita/Krishna, were then redesigned and ported to TSMC's 28nm, and then back to 28nm GloFo, and finally replaced by Beema and Mullins.
AMD has definitely had execution problems, and it's been this way for as long as I can remember. This isn't to say that other semicos don't, but AMD is already in a precarious position with end customers and OEMs alike skeptical regarding execution and timeline. Unfortunately, it's a reputation that's well deserved.
ARM will never be put in a desktop. ARM is optimized for low power usage, if you try to do desktop workloads with it you end up using more power than the equivalent x86 part. I'm pretty sure one of these three articles that should have really been one article states they have no intent on ARM desktop parts.
What in the world does the ISA have to do with 'optimized for low power usage'? You're aware that there's no technical reason ARM can't match x86 or even surpass it, right? The reason ARM is focused on low power today is because historically that's where they've made their money -- embedded products tend to require low power and supreme efficiency. Currently ARM dominates the mobile market, again a market that requires low power and supreme efficiency. Nothing's stopping Qualcomm from designing a giant ARM core that chews through power but offers great performance at the top end. In fact, Apple is already sorta making that step ;P
I am writing this on an ARM laptop with rather decent performance. ARM chips may be historically optimized for low power usage, but that is not an inherent trait of the instruction set.
ARM can also be optimized for high performance. AMD will develop their own ARM design to launch in 2016. So, there is definitely the opportunity to offer desktop systems based on ARM. It's not a question of hardware. It's a question of software support, especially the OS.
Indeed, Windows 8.2 will offer out of the box support for ARM and everything that AMD is working on. The system will be able to handle workloads never thought possible with minimal overhead
that's what I think as well. These might be nearly identical cpu's from an architecture point of view, one with an arm instruction set decoder and one with an x86 instruction set decoder, both of which will likely map to some internal instruction set the cpu actually understands. Seems pretty reasonable to me. Might even be able to use the same motherboards and everything.
I expect you're right, the only difference will be on the front-end with the back-end being the same. All modern x86 CPUs are RISC on the back-end now.
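As a rough illustration of that front-end/back-end split, here is a toy sketch in Python (not real microarchitecture; the instruction strings and micro-op names are all invented for the example): two ISA-specific decoders emit a shared internal micro-op vocabulary, and one back-end consumes either.

```python
# Toy model of "two front-ends, one back-end". Every name here is
# invented for illustration; real decoders are vastly more complex.

# Each front-end maps its own ISA's instructions to common micro-ops.
X86_DECODE = {
    "ADD EAX, EBX": ["uop_add"],
    "MOV EAX, [mem]": ["uop_load"],
    "ADD EAX, [mem]": ["uop_load", "uop_add"],  # x86 allows memory operands
}
ARM_DECODE = {
    "ADD R0, R0, R1": ["uop_add"],
    "LDR R0, [mem]": ["uop_load"],  # ARM is load/store: no memory-operand add
}

def decode(isa, instruction):
    """Front-end: pick the decoder for the ISA, emit shared micro-ops."""
    table = {"x86": X86_DECODE, "arm": ARM_DECODE}[isa]
    return table[instruction]

def execute(uops):
    """Back-end: identical regardless of which front-end produced the uops."""
    return [f"executed {u}" for u in uops]

# The same back-end consumes output from either front-end.
assert decode("x86", "ADD EAX, [mem]") == ["uop_load", "uop_add"]
assert execute(decode("arm", "LDR R0, [mem]")) == ["executed uop_load"]
```

The point of the sketch is only that the decoder tables differ per ISA while everything downstream of `decode` is shared, which is the claim being made about a hypothetical ambidextrous design.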
An FX Steamroller or Excavator might yield some surprises, should AMD ever release either. However, this may only be true in terms of parallelism and not clock speed as the process doesn't seem to support the latter.
In all fairness (and I'm all for ARM being good competition to x86), for the majority of desktop workloads one DOES need more than what a 2GHz quad-core ARM SoC can offer atm. Especially if it's a Krait core, seeing how Qualcomm is kinda following the lower performance-per-clock strategy atm.
I don't think it's lack of competition, just that their competition has changed. Low power use for longer mobile battery life, or cheaper data-center power bills, has made ARM popular for both mobile devices and the server farms that feed them. Add in all the embedded systems, and it's not just the average user who is thinking that a relatively slow ARM core is "good enough" for what they need.
Intel's processors have sacrificed large performance gains in the fight to reduce power consumption. AMD has almost given up on the drive to being the most powerful in order to try and bring in those "right sized" APUs, while also increasingly trying to push power usage down in a bid to match Intel with options in the mobile space.
AMD has a shot at being relevant again. Thanks to mobile, billions of dollars are being poured into independent foundries, which gives AMD better options than ever. Intel might still have a process lead, but that lead matters less than it used to.
Intel hasn't hit much of a performance wall. They just have been choosing to funnel their improvements into reducing power consumption, moving things on-chip and improving GPU performance.
This has come at the expense of increasing CPU performance much, but look at performance per watt. Haswell is HUGELY beyond Sandy when you take that into account.
That, I think, is going to be where catching up is going to be hard. Power consumption matters to a lot of people. In mobile it is obvious, but it also ties into the server market too, and even the home market with things like NUCs and HTPCs (and power bills).
Intel just needs to decide to skip pouring process improvements into reducing power consumption and drop them into improving CPU performance for one tick-tock cycle, and they'd probably have another 20-50% increase in performance over a generation or two.
Haswell-E (or is it Broadwell-E) is already confirmed for octo-core processors. That right there will be a nice jump. I'd be a little surprised if at some point Intel doesn't drop hexa-core into at least their high-level non-E lineup (even if they are still i7 only). Might take Skylake, might be a generation after, but I'd be surprised if it doesn't show up sooner rather than later.
Really, when it comes down to it, I want more CPU performance too, but I care a lot more about the power savings Intel has been doling out. I just wish they could balance it across their platform with more performance-targeted desktop architectures and more power-oriented mobile architectures.
Haswell isn't that much better at performance per watt under load. The power efficiencies mostly come from power gating under idle or semi idle conditions.
If Intel was willing to stick max power consumption on its products, I might believe you (At the high end. On low usage scenarios, Intel has had huge improvements)
However, Intel's solutions in mobile... Well, here (http://www.notebookcheck.net/Review-Asus-Zenbook-I... we have a 28W i7 (Iris Pro 5100), 8GB RAM, 1440p 13.3" screen. It used under 52.2W in stress testing... due to throttling.
The CPU is not capable of running its GPU at max clock + CPU at base clock while doing demanding tasks. They found 2.4GHz + 900MHz (note: CPU base == 2.8GHz, GPU is technically 200MHz base, 1200MHz max).
I would just like to note, Intel consistently breaks their TDPs on notebooks that are not from Apple (on the high end), which I suspect is due to Apple implementing much better throttling than its competitors.
testbug00, good point but 2 problems. 1) TDP is about "heat" not "power". 2) That was the whole system including memory, drive, bus, screen, etc, not just the CPU.
In my biased opinion laptops are not really suited for full-throttle operations for more than a few minutes at a time. They get too hot, too loud, lack proper ventilation, parts wear out sooner, and they are still slower than lower priced desktops.
Intel has done a great job of reducing power requirements on the low end over the past couple of years. It is at the high end, where heat is less of an issue, that Intel has slacked off. Maybe Intel is hoping AMD will catch up, because if AMD were to go out of the CPU business, the trust busters would be after Intel in a heartbeat.
They have no competition in the performance segment, and lowering power requirements assures that ARM can't creep into desktop. Once again I say, AMD killed the x86 market by over-leveraging to buy ATI and being unable to compete due to finances.
I think Intel have hit a wall. Otherwise they wouldn't have done a crude 77W Ivy Bridge if a usual 95W version would have been more efficient. Intel's clocks are nearing the limit. Even new processes with technologies like FinFET cannot change this. And IPC improvements are getting more and more difficult because the core is already highly optimized. Even going from a 6-wide to an 8-wide backend couldn't help much.
This just refers to their "small cores", their semi-custom track and so on. Big cores will have a follow-up too, we can guess. Might look different though. It was really to be expected for them to continue the small cores anyhow.
Larrabee is in Xeon Phi, so Larrabee was delivered, but not for you, and, indeed, not in its originally promised form.
But I agree, the project to deliver an x86 graphics device aka Larrabee failed (because it was not a good idea from the very beginning), so, instead, the results were reused as a compute accelerator.
I don't know if going big on ARM is the right choice for AMD. I understand that's where the market is going and AMD thinks it can earn some money there but... at the same time it's giving up markets where it still can be competitive. Like the high-end server market. Considering the transistor density of GF's 28nm SHP node, AMD could have released a monolithic 8-module Steamroller Opteron this year. Plus '32-core' high-end models. Having a high-performance 32-core Opteron in 2014 is way better than having a possible hit in 2016. It feels like AMD is admitting defeat to Intel, but does it really think it can take on Qualcomm, Broadcom, ... and even still Intel? Qualcomm is going to have its custom 64-bit ARM core ready by that time too. Broadcom already has a big portfolio of low-power MIPS server SoCs. The market AMD is entering is going to be even more saturated.
It seems that AMD has completely abandoned their "big" multicore Opterons after the Piledriver-based 6300 series, as of now. It was probably a business decision taken at the very top of their management - presumably, they decided that it's not worth it to continue, thus surrendering the big-core server x86-64 market to Intel altogether, as of now. IMHO, it's indeed not a very good idea, because their big double-die 6100/6200/6300 Opterons run at lower frequencies anyway (~2.5-3.5 GHz), so the new 28nm process used for Kaveri would be good in terms of power consumption for such a frequency range. And they already have the new Steamroller core up and running on the market in Kaveri since January. It seems like while hesitating to reiterate the big Opterons on 28nm & Steamroller (which seems to be clearly doable technically), they shot themselves in the foot once again. Or maybe they just don't have enough manpower to do it simultaneously with other things. It's a pity.
Blind you all are. 2014: 2 chips. 2015: one chip with one of 2 cores. 2016: one chip supporting 2 instruction sets. Ambidextrous. Keller says ARM has more registers and less decode circuitry. So bolt on the x86 decoder and there you go. Of course that is a gross oversimplification. But why the images, why "ambidextrous", and how else could little AMD do 2 high-performance designs at once? Go ask them and see if they deny it.
Jim Keller designed the A series of chips for the iPhone; this guy is legendary. The next generation of chips from AMD will be the best the market has seen thus far, and to answer phkahler, AMD doesn't have to deny or confirm a thing - WHAT WE DO KNOW is that we're getting much, MUCH better chips in the next 3 years and there's nothing Intel can do about it
Finally some good news for me. Hopefully the Excavator-core based APU/CPU will be a decent upgrade from Kaveri. If not, an FM2+ compatible "High Performance" 64-bit x86 CPU/APU would be all I ask for. I don't care if it's a CPU without an IGP or an APU with an IGP. I just want a stronger FM2+ compatible processor with 6-8 cores. The Xbox One and PlayStation 4 use custom AMD 8-core processors. Newer games will probably utilize more than 4 cores in the near future. I mean, Watch Dogs' requirements seem to show that. Unless AMD's IPC gets similar to Intel's, they need more cores for performance. FPU/compute technology is great, but games (my usage) are CPU intensive in some cases, and AMD APUs/CPUs don't cut it at 4 cores. Show me a miracle, AMD.
2015-17 is too far away. This was the worst time to start building a desktop, and FM2+ was a bad choice. Going AMD was a bad choice for now as the future isn't clear.
AMD has been laying off their design team in Austin in large numbers, so I am curious how they plan on developing two new ISA designs. There is sure to be some re-use in the SoC, but the most complex portion is the CPU. The ARM and x86 ISAs are very different, so it is not as easy as changing the decoder. There are very different instructions for load/store and security, so I can't see much re-use in the CPU. Maybe the adders, multipliers and high-level micro-architecture widgets, but it is two separate efforts. With a reduced team. Jim Keller is no superman - far from it, in my opinion - so given their execution track record, they are setting themselves up for a huge challenge here.
Jim Keller is the greatest, don't talk down the guy. When AMD scales up his designs to the PC space great things will happen. Don't come in here talking trash just because you typed on a keyboard plugged into a box that says iNtel Inside. Be level-headed and see that AMD is putting out the real trash (folks like you) and bringing in people who've been in the business and have the KNOWLEDGE and EXPERIENCE to engineer the NEXT BIG THING in x86 and ARM.
MartinT - Monday, May 5, 2014 - link
Yeah, because it's at all reasonable to expect a 2016 clean-sheet CPU design to be hobbled by compatibility with a 2009 socket. What a load of bull.
MartinT - Monday, May 5, 2014 - link
Even if that were true, which I'm not sure it is, wouldn't you still be a fool for believing a promise made by AMD?

whickywhickyjim - Monday, May 5, 2014 - link
more trolling statements, eh, trollie? tell your mom to stop calling me.

BPM - Tuesday, May 6, 2014 - link
come on. there's no place for this in here.

Gizmosis350k - Wednesday, May 7, 2014 - link
I agree, I for one am glad to see some good news coming out of AMD for once

ericore - Thursday, May 8, 2014 - link
time to move out of mom's place whickywhickyjim

whickywhickyjim - Thursday, May 8, 2014 - link
nah, your mom's place is alright. she makes good meatloaf.

cowzzwoc - Thursday, May 8, 2014 - link
You are the most vile person on anandtech, congratulations

MadAd - Monday, May 5, 2014 - link
I don't see any promises, just an announcement
ericore - Thursday, May 8, 2014 - link
No point arguing with whickywhickyjim, he has no reason, just beliefs. Let the sailor sail.
Gizmosis350k - Friday, May 9, 2014 - link
and sail he shall

WieRD_WoLF - Tuesday, May 13, 2014 - link
And then there is the issue that AMD is still using HyperTransport 3.1 (their latest standard), which was last updated around 2008-2009. Though AMD has only ever used up to a 16-bit wide link, whereas HyperTransport 3.1 allows up to a 32-bit link (think Intel's DMI or QPI).
Gizmosis350k - Wednesday, May 7, 2014 - link
Colour me pink and call me a tit, but I don't think DDR4 is on AMD's agenda
Gizmosis350k - Saturday, May 10, 2014 - link
I guess so, technically we can see the high bandwidth ecosystem with the PS4, once they shrink dies we should be ok

mickulty - Sunday, May 11, 2014 - link
DDR4 means more bandwidth, which is high-priority for AMD with their HSA/APU agenda.

3DVagabond - Saturday, May 24, 2014 - link
Agreed. By the time this comes along they'll be using on-die memory (HBM).

SunLord - Tuesday, May 6, 2014 - link
AM3+ will be stone cold dead in 2016 as DDR4 will break all compatibility due to its completely different physical architecture

gruffi - Tuesday, May 6, 2014 - link
AM3+ is just a new revision of AM3. It's not a new socket. AM3 was launched in 2009.

whickywhickyjim - Tuesday, May 6, 2014 - link
Gizmosis350k - Monday, May 12, 2014 - link
AMD should have an enthusiast socket and a mainstream socket

Noe7119 - Friday, May 9, 2014 - link
AMD is AWESOME. Stop CRYING.....FOOL

SunLord - Tuesday, May 6, 2014 - link
Gizmosis350k - Wednesday, May 21, 2014 - link
The men who own AM3 boards will be just fine, because they will have Phenom II X4 965s, son.
gruffi - Tuesday, May 6, 2014 - link
Fantastic at meaningless paper launches? Just for your info, the article is about AMD, not Nvidia. ;) And AM3+ is dead. People have known that for quite some time. Welcome to 2014! It's really naive to expect a desktop socket to live more than 5 years.
whickywhickyjim - Tuesday, May 6, 2014 - link
for more info on paper launches, see kaveri. thanks for telling me what year it is. very useful info. more than 5 years, you say? naive, eh? by your maths, it's already been 5 years -- seeing that you can't discern the difference between am3 and am3+, and 2009 vs. 2011. with piledriver production slated to continue through 2015, that would be more than 5 years, by your counting. got any more pearls of wisdom?
Galidou - Thursday, May 8, 2014 - link
In 2016 it will be 5 years for am3+, don't know if he meant that, but this is only a roadmap like Intel and Nvidia release often to show us where they are going. We rarely get any precise information on these roadmaps. The arguments ''go suck a butt'' and ''tell your mom to stop calling me'' should get you banned, because they are proof that your argumentation is weak and that you feel attacked by opinions about computer hardware, like a ''groupie''...
Got any more pearls of wisdom like the two arguments I named above?
whickywhickyjim - Thursday, May 8, 2014 - link
yes. i'm not convinced that you understand the concept of an argument... also, have fun on your crusade to ban commenters you disapprove of.
Galidou - Friday, May 9, 2014 - link
I'm nowhere near a crusade, just against internet bullying. I just think this has no reason to exist in forums like this one. This is a roadmap, and you're making allusions about details AMD should give that no other company does with roadmaps like these.
Gizmosis350k - Wednesday, May 21, 2014 - link
Exactly, but we should remember that if whicky doesn't like something, it's not allowed to exist.
MLSCrow - Friday, May 23, 2014 - link
Wow, this whickywhickyjim guy is ridiculous, huh? I'm showing everyone in the office his posts. This is priceless. LMAO!
Gizmosis350k - Wednesday, May 28, 2014 - link
Glad you're having fun bro, but the Intel Kool-Aid is addictive.
Jinx50 - Friday, May 30, 2014 - link
And to be frank, no one gives a whit regarding your apparent butthurt. It's about time AMD did a ground-up design as opposed to making those still chugging along with a Phenom II board happy. "Don't approve"? Buy Intel; they change sockets like you "should" change your underwear.
Gizmosis350k - Sunday, June 1, 2014 - link
$$
coburn_c - Monday, May 5, 2014 - link
Less staff than ever, farther behind than ever, and now they will split their resources...
bji - Monday, May 5, 2014 - link
That's OK. x86 performance increases have slowed to the point where it's not as hard to catch up. AMD will reach performance parity with Intel sometime within the next 2-3 years, because it will cost Intel more than the market can bear to stay ahead of AMD. It's already costing more than the market can bear to significantly increase x86 speeds, which is why the x86 performance increase curve is flattening out (witness the stagnation in single-core performance increases in the past 2-3 years).
coburn_c - Monday, May 5, 2014 - link
Lack of competition is the cause of the performance malaise.
eanazag - Monday, May 5, 2014 - link
Lack of competition from a CPU performance perspective. There have been clear and measurable advances in power efficiency. AMD is behind in process technology, which has always been an issue. AMD has been behind in other technologies such as high-k metal gates and FinFET blah blah 3D transistors where Intel has had a lead. Telling me AMD will be caught up in 2-3 years: that has never happened in the past and likely will not ever take place. Pointless optimism.
MikeMurphy - Monday, May 5, 2014 - link
^ and amazingly they still manage to sell products that I'd like to buy. The high-end APUs are really impressive and a lot of fun to tune. The new affordable Kabini/Beema architecture is an excellent platform for affordable NUCs, basic computing and some light-duty HTPC work. Intel is an efficiency king but doesn't service all parts of the market well.
PEJUman - Monday, May 5, 2014 - link
as an owner of an i7-4770K and an A10-7850K, I can say both are equally fun to tinker with. Yes, the A10 is impressive if viewed sans Intel, but the only reason I bought the Kaveri is because Microcenter is selling it for $130. At the $190 or so MSRP, I would've gone elsewhere.
I also have the A6-1450 laptop, which is amazingly good for the price, but again, price being the keyword here. I would much rather have competitive AMD designs at a comparable price/profit margin with Intel. It is hard to build a sustainable business based on low prices...
Gizmosis350k - Wednesday, May 7, 2014 - link
Intel needs competition, but as we can see AMD is hellbent on making each transistor the size of Jupiter.
MartinT - Monday, May 5, 2014 - link
I'd be more easily convinced of that if AMD had shown any sign of catching up since the summer of 2006.
calyth - Monday, May 5, 2014 - link
So what are your usage patterns that require a super-fast CPU nowadays? My game of Bioshock Infinite didn't hiccup when I used an A10-5800K paired with a real graphics card, and it's plenty fast for video conversion.
MadAd - Monday, May 5, 2014 - link
rofl, are you gonna tell us you only ever fit $1000 Intel chips in your boxes? AMD have been price/performance competitive in many of their generations since 2006, all except the very high end.
MartinT - Monday, May 5, 2014 - link
I'm telling you that AMD has never shown signs of catching up with Intel in x86 performance since the release of Conroe/Merom/Woodcrest almost eight years ago. Hence my scepticism at the OP's prediction that they will be caught up in "2-3 years."
Alexey291 - Tuesday, May 6, 2014 - link
and yet AMD hasn't had a "mid-end" (ahem) CPU for about 6 or so years. They were all strictly below the $200 mark.
Gizmosis350k - Wednesday, May 7, 2014 - link
If all he does is buy $1k chips and shit on AMD, he must be living life in the fast lane.
The_Assimilator - Monday, May 5, 2014 - link
"AMD will reach performance parity with Intel sometime within the next 2 - 3 years"

That's what AMD fans have been saying since Conroe. It ain't happened since then, it ain't gonna happen in the future. Intel is ahead, they know it, and after the P4 fiasco they won't ever give up their lead in the x86 race.
Flunk - Monday, May 5, 2014 - link
That's been said before, back in the Socket 7 era. Then AMD caught up with the Athlon, pulled ahead with the Athlon XP, and stayed competitive up until the Core 2 was released. You can't write off AMD yet. Intel has been dragging their heels on CPU performance since Sandy Bridge, so there is a potential opening there.
Gizmosis350k - Wednesday, May 21, 2014 - link
The problem is that Moore's law is dead. If AMD gets in shape now they can reach right up there, but the architects, and more importantly Global Foundries, are where the real problem lies.
bji - Monday, May 5, 2014 - link
As a response to your comment, and to many of the comments above yours: the difference is that since Sandy Bridge, the speed at which Intel is moving forward on x86 performance has slowed, the first time in the history of x86 that this has happened. So the game is different now. Intel is moving ahead more slowly, and it will cost AMD less to catch up now. As Flunk mentions below, AMD actually already did catch up and surpass Intel once, when Intel stumbled. Now, in 2014 (and since 2011-ish, really), Intel hasn't stumbled so much as they've simply hit a performance improvement wall. Now that Intel isn't moving forward so fast, AMD can catch up.
cmdrdredd - Monday, May 5, 2014 - link
You think Intel has slowed progress on performance because they can't progress faster. Truth is they don't have to, so they stall on purpose to milk consumers. I thought it was obvious myself.
bji - Tuesday, May 6, 2014 - link
I think you're wrong. The desktop market is stagnating, on the verge of shrinking if it isn't already. x86 CPUs have been more than fast enough for casual uses for at least 6 years now. They've been more than fast enough for moderately stressful work for more than 3 years now. There are very few workloads, used by very few people, that can't be satisfactorily handled by a $200 CPU from three years ago. Finally, the lowest-hanging fruit have already been picked over and over again in the quest for improving x86 performance. The node shrinks have just about hit an impossible, or at least very difficult to breach, wall. All of this means that it is now prohibitively expensive to increase x86 performance substantially. Doing so demands more money than the market has to offer Intel for doing so. Sure, increases will happen, but only at a pace that can be achieved with lesser investment, as demanded by lesser market interest in x86.
Lower power, on the other hand - that is an area where the advances are easier to make, at least for now, and consumer demands are high, given that more x86 CPUs are going into notebooks now than desktops.
Intel *can't* progress any faster, not while investing in R&D that is actually expected to have a payoff. But AMD? AMD still has lower-hanging fruit to pick. They can get down to the same process node as Intel, piggybacking off of demand from other segments of the industry that provide the dollars necessary to update fabs to tech competitive with Intel. They can exploit new ideas in CPU architecture (new to them, probably old to Intel) to bring their IPC up to par with Intel.
It will happen, not because AMD will magically become better at competing with Intel, but because Intel will hit the wall first (and is already hitting it from a performance increase perspective) and AMD will have the benefit of the trail already being blazed for them, and of having to spend less money to get to that wall. It costs more to be the frontrunner than to follow the frontrunner. And so once the frontrunner can no longer run faster, the one trailing it will catch up.
Carleh - Tuesday, May 6, 2014 - link
And AMD is right to diversify its portfolio with ARM CPUs, so it can tap both markets at once, minimizing risk.
Gizmosis350k - Wednesday, May 21, 2014 - link
Moore's law no longer applies to CPUs.
testbug00 - Monday, May 5, 2014 - link
Eh, it is entirely possible. Since Sandy, Intel has been reaching for the highest fruits to try to get improvement, and they have got very little. No reason why AMD could not get its own version of Conroe, which was based off parts of the Pentium M and P4 (and some other stuff, ofc).
Unless Intel is willing to make a new architecture, ground up, there is not much performance gains to be seen from its Big cores. Now, power efficiency improvements, that is a whole different story :)
gruffi - Tuesday, May 6, 2014 - link
AMD outperforms Intel's low-power Atom cores with their low-power Cat cores. High-end cores aren't everything. 99.9% of the market doesn't need processors like the i7-4770K. And I don't need to mention iGPUs. So, in 2 of 3 fields AMD isn't just on par; they are ahead of Intel in terms of performance. Some people really should take a look beyond Intel's propaganda machine.
Gizmosis350k - Wednesday, May 21, 2014 - link
Exactly, Intel isn't god, they just paid him off to sit in his throne for a day and he decided to humour them.
Arnulf - Monday, May 5, 2014 - link
Working on similar projects like these might not mean double the required number of people - there might be synergy between the projects, at least with regards to some building blocks. Besides, they are hedging their position. At some point ARM will hit the desktop in larger numbers, allowing for even cheaper computers. A complete quad-core 2+ GHz ARM system should be doable at lower cost than the price of a quad-core Intel CPU, using less power (and, admittedly, offering lower performance... but how much performance is really needed for the majority of workloads?).
gostan - Monday, May 5, 2014 - link
you do know that synergy is a marketing buzzword right? right? right? right?
mrdude - Monday, May 5, 2014 - link
No, he's right. The RISC vs. CISC wars ended a long time ago, and vast portions of modern CPU architectures are actually incredibly similar. For example, AMD can utilize the same components, like the IMC, loop buffers, and on-die GPU, across what are "different" architectures. And we've seen this before with both AMD and Intel. AMD used the branch predictor from Jaguar/Bobcat in their Piledriver cores because it was more efficient, and we all remember how Intel transitioned and borrowed from their Pentium M.
The worry of splitting resources is still there though, given AMD's significantly smaller R&D size. But that 'synergy' buzzword isn't just a buzzword. I'm far more worried about execution in a timely manner, as AMD has dropped the ball repeatedly there in the past few years.
testbug00 - Monday, May 5, 2014 - link
In the past few years (since the BD release) the only large issue AMD had was with the Kaveri launch. AMD is getting better and better at execution.
Not enough to stop worrying, but enough to where it is "AMD could screw up this execution" instead of "well, we will see this coming in 2017 if we are lucky".
mrdude - Tuesday, May 6, 2014 - link
The A8-7600, imo the pick of the litter, has been pushed back to the tail end of the year. But remember that Kaveri was meant to launch last year but was replaced by Richland, which was just Trinity with a clock speed bump. Then there's Kabini/Temash, which started out in life as Wichita/Krishna, were then redesigned and ported to TSMC's 28nm, then back to 28nm GloFo, and finally replaced by Beema and Mullins.
AMD has definitely had execution problems, and it's been this way for as long as I can remember. This isn't to say that other semicos don't, but AMD is already in a precarious position with end customers and OEMs alike skeptical regarding execution and timeline. Unfortunately, it's a reputation that's well deserved.
Gizmosis350k - Wednesday, May 21, 2014 - link
Exactly, but everyone feels that sitting back and trash-talking the underdog gets them paid or something.
coburn_c - Monday, May 5, 2014 - link
ARM will never be put in a desktop. ARM is optimized for low power usage; if you try to do desktop workloads with it you end up using more power than the equivalent x86 part. I'm pretty sure one of these three articles that should have really been one article states they have no intent on ARM desktop parts.
mrdude - Monday, May 5, 2014 - link
What in the world does the ISA have to do with 'optimized for low power usage'? You're aware that there's no technical reason ARM can't match x86 or even surpass it, right? The reason ARM is focused on low power today is because historically that's where they've made their money -- embedded products tend to require low power and supreme efficiency. Currently ARM dominates the mobile market, again a market that requires low power and supreme efficiency. Nothing's stopping Qualcomm from designing a giant ARM core that chews through power but offers great performance at the top end. In fact, Apple is already sorta making that step ;P
Anders CT - Monday, May 5, 2014 - link
@coburn
I am writing this on an ARM laptop with rather decent performance. ARM chips may be historically optimized for low power usage, but that is not an inherent trait of the instruction set.
gruffi - Tuesday, May 6, 2014 - link
ARM can also be optimized for high performance. AMD will develop their own ARM design to launch in 2016. So, there is definitely the opportunity to offer desktop systems based on ARM. It's not a question of hardware. It's a question of software support, especially the OS.
Gizmosis350k - Wednesday, May 21, 2014 - link
Indeed, Windows 8.2 will offer out-of-the-box support for ARM and everything that AMD is working on. The system will be able to handle workloads never thought possible, with minimal overhead.
andrewaggb - Monday, May 5, 2014 - link
that's what I think as well. These might be nearly identical CPUs from an architecture point of view, one with an ARM instruction set decoder and one with an x86 instruction set decoder, both of which will likely map to some internal instruction set the CPU actually understands. Seems pretty reasonable to me. Might even be able to use the same motherboards and everything.
Flunk - Monday, May 5, 2014 - link
I expect you're right; the only difference will be on the front-end, with the back-end being the same. All modern x86 CPUs are RISC on the back-end now.
Zoomer - Monday, May 5, 2014 - link
An ARM decoder for a *dozer/*driver architecture would be very, very interesting. And absolutely dominate in performance.
Gigaplex - Monday, May 5, 2014 - link
What makes you think that? The dozer architecture isn't that fast (at least compared to Intel).
silverblue - Tuesday, May 6, 2014 - link
An FX Steamroller or Excavator might yield some surprises, should AMD ever release either. However, this may only be true in terms of parallelism and not clock speed, as the process doesn't seem to support the latter.
Gizmosis350k - Wednesday, May 21, 2014 - link
It's not like Intel cares about any of this, so fuck them.
Alexvrb - Tuesday, May 6, 2014 - link
Well, until their new ground-up architecture is ready, they'd be better off with a Puma+ backend.
Alexey291 - Tuesday, May 6, 2014 - link
In all fairness (and I'm all for ARM being good competition to x86), for the majority of desktop workloads one DOES need more than what a 2GHz quad-core ARM SoC can offer atm. Especially if it's a Krait core, seeing how Qualcomm is kinda following the lower performance-per-clock strategy atm.
eanazag - Monday, May 5, 2014 - link
I'll take this as good news. I'm glad to hear AMD is done working on its P4.
Gizmosis350k - Wednesday, May 7, 2014 - link
I see what you did there. Fuck you.
djc208 - Monday, May 5, 2014 - link
I don't think it's lack of competition, just that their competition has changed. Low power use for longer mobile battery life, or cheaper data-center power bills, has made ARM popular for both mobile devices and the server farms that feed them; add in all the embedded systems and it's not just the average user who is thinking that a relatively slow ARM core is "good enough" for what they need. Intel's processors have sacrificed large performance gains in the fight to reduce power consumption. AMD has almost given up on the drive to being the most powerful in order to try and bring in those "right-sized" APUs, while also increasingly trying to push power usage down in a bid to match Intel with options in the mobile space.
Anders CT - Monday, May 5, 2014 - link
AMD has a shot at being relevant again. Thanks to mobile, billions of dollars are being poured into independent foundries, which gives AMD better options than ever. Intel might still have a process lead, but that lead matters less than it used to.
azazel1024 - Monday, May 5, 2014 - link
Intel hasn't hit much of a performance wall. They just have been choosing to funnel their improvements into reducing power consumption, moving things on-chip, and improving GPU performance. This has been at the expense of increasing CPU performance much, but look at performance per watt. Haswell is HUGELY beyond Sandy when you take that into account.
That I think is going to be where catching up is going to be hard. Power consumption matters to a lot of people. In mobile it is obvious, but it also ties in to the server market too and even home market with things like NUCs and HTPCs (and power bills).
Intel just needs to decide to skip pouring process improvements into reducing power consumption and drop them into improving CPU performance for one tick-tock cycle, and they'd probably have another 20-50% increase in performance over a generation or two.
Haswell-E (or is it Broadwell-E) is already confirmed for octo-core processors. That right there will be a nice jump. I'd be a little surprised if at some point Intel doesn't drop hexacore into at least their high-level non-E lineup (even if they are still i7 only). Might take Skylake, might be a generation after, but I'd be surprised if it doesn't show up sooner rather than later.
Really, when it comes down to it, I want more CPU performance too, but I care a lot more about the power savings Intel has been doling out. I just wish they could balance it across their platform, with more performance-targeted desktop architectures and more power-oriented mobile architectures.
Gigaplex - Monday, May 5, 2014 - link
Haswell isn't that much better at performance per watt under load. The power efficiencies mostly come from power gating under idle or semi-idle conditions.
Hace - Monday, May 5, 2014 - link
Unless you're on a notebook, then the power gains are massive even under load.
extide - Tuesday, May 6, 2014 - link
Maybe partial load...
testbug00 - Monday, May 5, 2014 - link
If Intel was willing to stick max power consumption on its products, I might believe you (at the high end; in low-usage scenarios, Intel has had huge improvements). However, Intel's solutions in mobile... Well, here (http://www.notebookcheck.net/Review-Asus-Zenbook-I... we have a 28W i7 (Iris Pro 5100), 8GB RAM, 1440p 13.3" screen. It used under 52.2W in stress testing... due to throttling.
The CPU is not capable of running its GPU at max clock + CPU at base clock while doing demanding tasks. They found 2.4GHz + 900MHz (note: CPU base == 2.8GHz, GPU is technically 200 base, 1200 max).
testbug00 - Monday, May 5, 2014 - link
I would just like to note, Intel consistently breaks their TDPs on notebooks that are not from Apple (on the high end), which I suspect is due to Apple implementing much better throttling than its competitors.
testbug00, good point, but 2 problems.
1) TDP is about "heat", not "power".
2) That was the whole system including memory, drive, bus, screen, etc, not just the CPU.
In my biased opinion laptops are not really suited for full-throttle operations for more than a few minutes at a time. They get too hot, too loud, lack proper ventilation, parts wear out sooner, and they are still slower than lower priced desktops.
Intel has done a great job of reducing power requirements on the low end over the past couple of years. It is at the high end, where heat is less of an issue, that Intel has slacked off. Maybe Intel is hoping AMD will catch up, because if AMD were to go out of the CPU business, the trust busters would be after Intel in a heartbeat.
coburn_c - Tuesday, May 6, 2014 - link
They have no competition in the performance segment, and lowering power requirements assures that ARM can't creep into the desktop. Once again I say, AMD killed the x86 market by over-leveraging to buy ATI and being unable to compete due to finances.
gruffi - Tuesday, May 6, 2014 - link
I think Intel has hit a wall. Otherwise they wouldn't have done a crude 77W Ivy Bridge if a usual 95W version would have been more efficient. Intel's clocks are nearing the limit. Even new processes with technology such as FinFET cannot change this. And IPC improvements are getting more and more difficult, because the core is already highly optimized. Even going from a 6-wide to an 8-wide backend couldn't help much.
Penti - Monday, May 5, 2014 - link
This just refers to their "small cores", their semi-custom track and so on. Big cores will have a follow-up too, we can guess. Might look different though. I really expected them to continue the small cores anyhow.
nofumble62 - Tuesday, May 6, 2014 - link
Where can AMD find enough engineering heads to design all of this? AMD has always been long on promises, short on delivery.
gruffi - Tuesday, May 6, 2014 - link
AMD always delivered. Not like some other companies, such as Intel. Or where is my Larrabee? ;-)
TiGr1982 - Tuesday, May 6, 2014 - link
Larrabee is in Xeon Phi, so Larrabee was delivered, but not for you, and, indeed, not in its originally promised form. But I agree, the project to deliver an x86 graphics device aka Larrabee failed (because it was not a good idea from the very beginning), so, instead, the results were reused as a compute accelerator.
Gizmosis350k - Wednesday, June 25, 2014 - link
Yes iNTEL is nothing but a fad
milli - Tuesday, May 6, 2014 - link
I don't know if going big on ARM is the right choice for AMD. I understand that's where the market is going and AMD thinks it can earn some money there, but... at the same time it's giving up markets where it still can be competitive, like the high-end server market. Considering the transistor density of GF's 28nm SHP node, AMD could have released a monolithic 8-module Steamroller Opteron this year. Plus '32-core' high-end models. Having a high-performance 32-core Opteron in 2014 is way better than having a possible hit in 2016. It feels like AMD is admitting defeat to Intel, but does it really think it can take on Qualcomm, Broadcom, ... and even still Intel? Qualcomm is going to have its custom 64-bit ARM core ready by that time too. Broadcom already has a big portfolio of low-power MIPS server SoCs. The market AMD is entering is going to be even more saturated.
TiGr1982 - Tuesday, May 6, 2014 - link
It seems that AMD completely abandoned their "big" multicore Opterons after the Piledriver-based 6300 Series, as of now. It was probably a business decision taken at the very top of their management - presumably, they decided that it's not worth it to continue, thus surrendering the big-core server x86-64 market to Intel altogether, as of now. IMHO, it's indeed not a very good idea, because, anyway, their big double-die 6100/6200/6300 Opterons run at lower frequencies (~2.5-3.5 GHz), so the new 28 nm process used for Kaveri would be good in terms of power consumption for such a frequency range. And they have already had the new Steamroller core up and running on the market in Kaveri since January. Seems like while hesitating to reiterate the big Opterons to 28 nm & Steamroller (which seems to be clearly doable technically), they shot themselves in the foot once again. Or maybe they just don't have enough manpower to do it simultaneously with other things. It's a pity.
Gizmosis350k - Wednesday, June 25, 2014 - link
I agree, but we'll have to wait and see - people love AMD for servers, let's wait and seephkahler - Tuesday, May 6, 2014 - link
Blind you all are. 2014: 2 chips. 2015: one chip with one of 2 cores. 2016: one chip supporting 2 instruction sets. Ambidextrous. Keller says ARM has more registers and less decode circuitry. So bolt on the x86 decoder and there you go. Of course that is a gross oversimplification. But why the images, why "ambidextrous", and how else could little AMD do 2 high-performance designs at once? Go ask them and see if they deny it.
Gizmosis350k - Wednesday, May 7, 2014 - link
AMD has Jim Keller; who does Intel have leading the R&D team?
Cpuhog - Monday, May 12, 2014 - link
I am assuming by that you think that is a positive thing for AMD? Why?Gizmosis350k - Wednesday, June 25, 2014 - link
Jim Keller designed the A series of chips for the iPhone; this guy is legendary. The next generation of chips from AMD will be the best the market has seen thus far, and to answer phkahler, AMD doesn't have to deny or confirm a thing - WHAT WE DO KNOW is that we're getting much, MUCH better chips in the next 3 years and there's nothing Intel can do about it
xKrNMBoYx - Sunday, May 11, 2014 - link
Finally some good news for me. Hopefully the Excavator-based APU/CPU will be a decent upgrade from Kaveri. If not, making an FM2+ compatible "high-performance" 64-bit x86 CPU/APU would be all I ask for. I don't care if it's a CPU without an IGP or an APU with an IGP. I just want a stronger FM2+ compatible processor that is 6-8 cores. Xbox One and PlayStation 4 use custom AMD 8-core processors. Newer games will probably utilize more than 4 cores in the near future. I mean, Watch Dogs' requirements seem to show that. Unless AMD's IPC gets similar to Intel's, they need more cores for performance. FPU/compute technology is great, but games (my usage) are CPU-intensive in some cases, and AMD APUs/CPUs don't cut it at 4 cores. Show me a miracle, AMD. 2015-17 is too far away. This was the worst time to start building a desktop, and FM2+ was a bad choice. Going AMD was a bad choice for now, as the future isn't clear.
Gizmosis350k - Wednesday, June 25, 2014 - link
Yes, I think AMD can do it; we're going to be seeing high-performance RAW compute across the board for all chips on all sockets :)
Cpuhog - Monday, May 12, 2014 - link
AMD has been laying off their design team in Austin in large numbers, so I am curious how they plan on developing two new ISA designs. There is sure to be some re-use in the SoC, but the most complex portion is the CPU. The ARM and x86 ISAs are very different, so it is not as easy as changing the decoder. There are very different instructions for load/store and security, so I can't see much re-use in the CPU. Maybe the adders, multipliers and high-level micro-architecture widgets, but it is two separate efforts. With a reduced team. Jim Keller is no superman - far from it in my opinion - so given their execution track record, they are setting themselves up for a huge challenge here.
Gizmosis350k - Wednesday, June 25, 2014 - link
Jim Keller is the greatest; don't talk the guy down. When AMD scales up his designs to the PC space, great things will happen. Don't come in here talking trash just because you typed on a keyboard plugged into a box that says iNtel Inside. Be level-headed and see that AMD is putting out the real trash (folks like you) and getting people who've been in the business and have KNOWLEDGE and EXPERIENCE about engineering the NEXT BIG THING in x86 and ARM.