"General purpose operating systems such as Linux (Android), Windows, OSX, and iOS require an MMU to function. That means M series processors, like all microcontrollers (MCUs), will never be tasked with running general purpose operating systems."
This is incorrect: Linux runs on the Cortex M and other platforms that lack an MMU via uClinux. There are some differences, yes, but for the most part it's transparent to the developer, and most software runs unmodified.
Like the Cortex-M3 & M4, it is a 32-bit ARMv7-M core processor. I is said to use a six-stage superscalar pipeline.
The ARM press release says that it will have highly flexible system and memory interfaces. Looking forward to seeing more details on that... (though of course it will lack a MMU).
It launches manufactured on a 40nm process and runs up to 400Mhz. However it will move to a 28nm process in the near future, where performance is expected to double (so one can assume a near doubling of clock speed as well).
Atmel is already said to have a license. Will be interesting to see if the Arduino folks pick this processor up for a new Arduino board. Arduino has picked up the Cortex-0+ for the new Arduino ZERO and uses the Cortex-M3 for the Arduino DUE. But they've yet to use the Cortex-M4. Their latest board, the Arduino TRE uses a Texas Instruments Sitari chip which is a Cortex A series processor. So who knows what direction they are moving in.
Texas Instruments licenses the Cortex-M processors, but I haven't heard of a license for this new processor. I just picked up a nice development board that uses the Cortex-M4 from TI.
Speaking from someone who's used MCUs and the M3/M4 series micro-controllers in development (professionally and hobby), I don't really see a point in using a general purpose OS on a small footprint. Something like this should be doing one thing and doing it very well.
There are plenty of RTOSes out there that take up a very tiny footprint that will do memory management and "thread management" for you if you need it. Heck all of my projects have gotten away with doing amazing things without needing a heap (i.e., calling malloc).
The biggest thing I don't like about uCLinux is it requires external RAM. A lot of development boards don't have them.
Im curious as to which RTOS, micro controller or any other 'hobby or professional' use case that rivals this M7? It's footprint, efficiency, capabilities and phenomenal 'future' opportunities this motion sensor will provide. You're using vacuum tubes. As are all wearables to date. Those utiliIng this new ARM architecture will be moving directly to SolidState. Overnight. It's THAT HUGE!
Best to 're-think' your tools and options, as using a micro-controller in your new project slated for mid 2015 release will be chewed up and spat to the floor by anyone using a 'currently available' M7. Difference between a typewriter and computer with Word Prodessor, with today's capabilities!
I can't apologize enough for my profound ignorance and stupidity, but what exactly were the 2 similar IoT anouncements on Anandtech in the last 2 days as referenced in the article? I can only find the Mediatek announcement. In the future, links to references would help stupid people like me.
For as often as my phone, computer, stove, and microwave all get out of sync, you'd think we humans haven't yet mastered the whole "tracking of time" thing. Most of my devices are supposed to auto-update online, but that doesn't excuse the poor time-keeping of modern devices in between updates. Smart watch? Doubtful.
Good point. 2014 and where are standards for this across all devices? The only devices that are truly accurate subscribe to a network time protocol (NTP) service. How about we recycle amplitude modulated carriers toward a narrow band digital signal broadcast NTP service. Then every device out there can have a simple AM loop antenna in it to "listen" to the NTP service and update. Of course this would require the FCC and IEEE to actually work together. Sigh. . .
There are already at least two ways to get a reference timestamp over the air. First is GPS, which will get you accurate timestamps into the sub-millisecond domain, which is why you see GPS devices in data centers usually as a stratum0 NTP device and/or a PTP Master. The second is WWVB operated by the NIST, which broadcasts reference timestamps at the 60KHz band. If you have an alarm clocks that sets itself this is probably the source it uses.
So for my own understanding of roughly how powerful these are, what x86 processor would you say they're most comparable to? From the general architecture they look like a first-gen P5 Pentium, minus the MMU (and optionally minus the cache and FPU). Would that be an accurate analogy?
It's hard to directly compare MCUs to application processors. They have different types of tasks. Application processors are about computational throughput and you pay for that in enormous gate counts (=area=cost) and energy consumption. Eg the quad core Intel Xeon I'm using right now has a gate count in excess of 2G gates. By comparison a small Cortex M0 will have as few as 12K gates and can run tasks such as wireless security sensors for years on end from a single coin cell battery. Also MCUs facilitate deterministic code execution where timing of events is critical (eg detect car crashing -> deploy airbag). For benchmarks look a this table: http://en.wikipedia.org/wiki/Instructions_per_seco... Seems a ARM Cortex M0 at 50MHz is somewhere between an Intel 486DX and the original Pentium.
Yeah, it's never going to be an exact comparison. I pulled the P5 guess from the architecture - both P5 and M7 are in-order, two-issue superscalar with two main integer paths and an FPU. And while their goals were very different, they both were very constrained for transistors (by current standards), so I figured they might have some level of comparison.
the nearest comparison to an x86 processor is the new intel edison. however where this will excel at certain tasks the edison will excel at others.
to break it down any task that is memory bandwidth constrained will be better on edison. any task that is latency critical and served best by an interrupt (the car crash airbag analogy above serves well) is better performed on the m7.
any task that is math based, requires simple calculations, and parallel execution or reordering on the fly of the task being performed is done frequently then the m7 would be a good choice with the inclusion of the dsp. an example of this is manipulation of incoming audio or video data in response to user input (examples like a guitar pedal for audio or video mixing effects like on a club's screens).
the simple fact is there's no perfect "one solution". what one does better the other does worse and no processor is best in all catagories
edison isnt so much a processor as it is a SoM or system on module. It contains an atom silvermont applications processor (AP) and an Intel Quark microcontroller (MCU).
hmmm.. they are a risc based architecture (armv7 ISA), so they wouldn't have as high as an IPC as a cisc based architecture like pentium. however this cortex-m7 mcu is more a soc than just a straight processor like the pentium was. hard to say as the workloads are so absolutely different.
will be exciting to watch some of the initial development & prototyping boards introduced by some of the big players (ti, stmicroelectronics and atmel) to see what native capabilities they break out. hopefully ethernet, sdio, jtag, and multiple channels of uart, can, spi, i2c/twi, dac are givens. would love a basic lcd interface too.
No, by definition RISCs have better IPC than CISCs, not the other way around (on a RISC pretty much every instruction executes in a single cycle, unlike the complex CISC instructions). Studies have shown x86 and ARM actually do the same amount of work per instruction due to x86 compilers avoiding the complex instructions (this observation is why RISC exists!), and ARM doing a lot of work per instruction (compared to other RISCs - such as allowing shift+add as a single instruction, conditional execution, load/store multiple). This effectively means any IPC difference is not due to ISA but due to microarchitecture.
Looking at the Dhrystone results, the M7 is a bit faster than A7, so should obliterate Quark, beat an old Pentium and do well against Silverthorne at similar frequency.
ARM specifically advised against comparing performance numbers across architectures, saying it was an apples and oranges comparison.
Despite similar numbers in these very synthetic benchmarks, when running actual application code , the M7 will never compete with the A series. The numbers are only useful comparing within the family.
Sure an M7 will never run Android, but that's not the point. v7-M and v7-A share the same Thumb-2 ISA and use the same compiler backend, so it's not apples/oranges. You can run the same binary on an M7 and A7 if you wish.
Due to its shorter pipeline, an M7 should beat an equivalently clocked A7. Of course an A7 can clock 2 times as high in 28nm so it wins in absolute performance. However that still means M7 is a huge leap from an M3/M4.
The chart says 2.14-3.23 dmips/mhz, or 5.04 coremark/mhz, so atom d525 or core i5-2400 for coremark, or between intel pentium/pentium pro and pentium 3.
For those guys who understand german here is the best review so far including the first implementation of the Cortex-M7 in a real MCU, a so called STM32F7 Family delivered by ST Microelectronics: http://www.elektroniknet.de/halbleiter/mikrocontro... Google translator may help....
The three big guys in the MCU world (Atmel, Freescale and ST) are already working in Cortex M7 processors and I'm sure they already have a internal implementation of the MCUs: http://www.arm.com/about/newsroom/arm-supercharges...
Freescale has it already in it's roadmap, and it's called Kinetis X.
Microchip is also a 'big guy' but is conspicuously absent from the ARM party. Which is a pity: I like their stuff, but I've now standardized on the ARM Cortex-M family of MCU.
So we expect to see this as the core of an SoC in wearables and for high end wearables, ARM recommends a gimped A7. Well what about the R series that falls right in between?
I'm still very fuzzy on the relative comparisons between the wearable processors. Maybe do an article comparing Aster platform (ARM7ESJ), Cortex-M4/M7, Cortex-Rx, Cortex-A7/A53
So what are the target market of the M series and R series? R series is higher performance? Both lack MMU right? The distinction between the two lines are kinda blurry to me.
Embedded (M) was traditionally a micro controller using on-chip flash and SRAM, no MMU, no DSP, no FP support. The R series are higher performance realtime CPUs with TCM, caches, branch prediction and often external DRAM and FP. Now that M also supports DSP, FP, caches, and is becoming high performance, things have become blurred. The ISA differences are now the main distinction, M only supports Thumb-1, Thumb-2 and uses a different interrupt model, while the R architecture is basically A series plus TCM minus MMU. So many TLA's...
Oh that's right, different interrupt model. M series is generally lower latency because it's directly coupled, if I recall. It's just that this new M7 line blurs that line even further than the M4 did... For TCM, is it generally DRAM integrated on the MCU, or a tight interface between the MCU and the DRAM chips? Didn't one of samsung's SSDs use a few Cortex-R3 cores for their controller?
Simply put, TCM is fast on-core instruction/data SRAM, similar to an I- or D-cache. It is fully under user control and thus without the non-deterministic effects of a traditional cache. TCM can be used in addition to a cache. TCM allows high frequencies like a cache, and thus is faster than an external SRAM.
The usage model is that you put all your critical realtime code/data in the instruction/data TCMs and run the rest from flash/DRAM. When an interrupt occurs, you start executing realtime code from the TCM immediately rather than having to wait for cache misses that inevitably occur if you didn't have TCM. So the TCMs are actually necessary for realtime on a fast CPU, having a low interrupt latency alone is not the whole story.
Oooh I see. It sounds like TCM is a big distinguishing feature between the M and R series then. So even if performance is equal, R series actually allows for applications with even tighter latency requirements than the M series. Well, learned something new today, thanks!
Most of the M series support an optional MPU for OS task protection. That said, security and MMU are 2 orthogonal things - an MMU doesn't stop exploits as otherwise we wouldn't have any viruses/trojans/rootkits/etc on PCs. For microcontrollers security is easier as there are far fewer possible security breaches, so it's more down to not setting default passwords or using old, already broken encryption algorithms.
IMHO only M3 or M4 - anything else is way overkill for eg. a watch. You definitely don't want to run anything as big/complex as Linux/Android if you want to provide at least a week of battery life.
who can tell me: how many clock cycles will be needed for ten taps 32-bit FIR filter output sample computation ? 1 cycle MAC instruction is O.K. but what about data transfer ?
It does 2 16-bit MACs per cycle and the graph shows 2x32 interfaces to DTCM, so that should mean it can sustain 2 MACs plus 64-bit load/store per cycle.
Even on a mid-wearable config, a 500Mhz Soc seems really overkill considering the original iPhone had a 400Mhz Soc and could do all the phone functions except taking speech (this could be done using a small custom DSP on the Soc). For a wearable, it cannot be expected to play video but maybe capture 3 minute video segments at a time. It seems the industry is pushing hardware as an overkill rate just to spur up a new segment of the market. This has led to compromises in battery life due to too much transistor counts and too high frequency chips being used. There was a day when a Pentium 266Mhz was a fast computer. Just our perception of numbers makes us think, it is slow as molluscs today. It ain't that slow. Running Linux, it does nicely.
43 Comments
Guspaz - Tuesday, September 23, 2014 - link
"General purpose operating systems such as Linux (Android), Windows, OSX, and iOS require an MMU to function. That means M series processors, like all microcontrollers (MCUs), will never be tasked with running general purpose operating systems."This is incorrect: Linux runs on the Cortex M and other platforms that lack an MMU via uClinux. There are some differences, yes, but for the most part it's transparent to the developer, and most software runs unmodified.
extide - Tuesday, September 23, 2014 - link
This is true, but there are some pretty big limitations when using uClinux. I wouldn't suggest it unless you really, really need it, heh.

HardwareDufus - Tuesday, September 23, 2014 - link
Like the Cortex-M3 and M4, it is a 32-bit ARMv7-M core processor. It is said to use a six-stage superscalar pipeline.

The ARM press release says that it will have highly flexible system and memory interfaces. Looking forward to seeing more details on that... (though of course it will lack an MMU).

It launches manufactured on a 40nm process and runs up to 400 MHz. However, it will move to a 28nm process in the near future, where performance is expected to double (so one can assume a near doubling of clock speed as well).

Atmel is already said to have a license. It will be interesting to see if the Arduino folks pick this processor up for a new Arduino board. Arduino chose the Cortex-M0+ for the new Arduino ZERO and uses the Cortex-M3 for the Arduino DUE, but they've yet to use the Cortex-M4. Their latest board, the Arduino TRE, uses a Texas Instruments Sitara chip, which is a Cortex-A series processor. So who knows what direction they are moving in.

Texas Instruments licenses the Cortex-M processors, but I haven't heard of a license for this new processor. I just picked up a nice development board that uses the Cortex-M4 from TI.
HardwareDufus - Tuesday, September 23, 2014 - link
Yes, I overused the phrase 'picked up' and cannot edit it. Feel free to substitute chose, selected, employed, used, etc...

xenol - Wednesday, September 24, 2014 - link
Speaking as someone who's used MCUs and the M3/M4 series microcontrollers in development (professionally and as a hobby), I don't really see a point in using a general purpose OS on such a small footprint. Something like this should be doing one thing and doing it very well.

There are plenty of RTOSes out there with a very tiny footprint that will do memory management and "thread management" for you if you need it. Heck, all of my projects have gotten away with doing amazing things without needing a heap (i.e., calling malloc).

The biggest thing I don't like about uClinux is that it requires external RAM, and a lot of development boards don't have it.
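To illustrate the no-heap point: the usual pattern is a fixed-size static pool, so worst-case RAM use is known at link time and fragmentation simply can't happen. A sketch - all the names here are invented:

```c
#include <stddef.h>

/* Fixed-size message pool instead of malloc() -- everything sits in
 * static storage, so worst-case RAM use is visible in the map file. */
#define POOL_SIZE 8

typedef struct { int id; int payload; } msg_t;

static msg_t pool[POOL_SIZE];
static unsigned char in_use[POOL_SIZE];

msg_t *msg_alloc(void)
{
    for (size_t i = 0; i < POOL_SIZE; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return &pool[i];
        }
    }
    return NULL;   /* pool exhausted: a hard, known limit, not a crash later */
}

void msg_free(msg_t *m)
{
    in_use[m - pool] = 0;
}
```

When `msg_alloc()` returns NULL you handle the overload explicitly, instead of discovering heap fragmentation in the field.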
akdj - Sunday, September 28, 2014 - link
I'm curious as to which RTOS, microcontroller, or any other 'hobby or professional' use case rivals this M7 in footprint, efficiency, capabilities, and the phenomenal 'future' opportunities it will provide. You're using vacuum tubes, as are all wearables to date. Those utilizing this new ARM architecture will be moving directly to solid state, overnight. It's THAT HUGE!

Best to 're-think' your tools and options, as a new project slated for a mid-2015 release using an older microcontroller will be chewed up and spat to the floor by anyone using a 'currently available' M7. It's the difference between a typewriter and a computer with a word processor, with today's capabilities!
isa - Tuesday, September 23, 2014 - link
I can't apologize enough for my profound ignorance and stupidity, but what exactly were the two similar IoT announcements on AnandTech in the last two days, as referenced in the article? I can only find the MediaTek announcement. In the future, links to references would help stupid people like me.

extide - Tuesday, September 23, 2014 - link
MediaTek, and this one.

loftie - Tuesday, September 23, 2014 - link
If the embargo on the slides is correct, you published an hour early.

Stephen Barrett - Tuesday, September 23, 2014 - link
Yeah, the CMS failed me on daylight savings. Hopefully we will be forgiven ;-)

nathanddrews - Wednesday, September 24, 2014 - link
For as often as my phone, computer, stove, and microwave all get out of sync, you'd think we humans haven't yet mastered the whole "tracking of time" thing. Most of my devices are supposed to auto-update online, but that doesn't excuse the poor time-keeping of modern devices in between updates. Smart watch? Doubtful.

markmuehlbauer - Wednesday, September 24, 2014 - link
Good point. It's 2014, and where are the standards for this across all devices? The only devices that are truly accurate subscribe to a Network Time Protocol (NTP) service.

How about we recycle amplitude-modulated carriers for a narrow-band digital broadcast NTP service? Then every device out there could have a simple AM loop antenna in it to "listen" to the NTP service and update. Of course, this would require the FCC and IEEE to actually work together. Sigh...
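Decoding such a service would be cheap on any MCU, too - about the only bookkeeping required is the fixed offset between the NTP epoch and the Unix epoch. A minimal sketch of that conversion:

```c
#include <stdint.h>

/* NTP timestamps count seconds since 1900-01-01; Unix time counts from
 * 1970-01-01. The difference is a fixed 2,208,988,800 seconds (70 years
 * including 17 leap days), so the conversion is one subtraction. */
#define NTP_UNIX_OFFSET 2208988800UL

/* NTP's 64-bit format is 32.32 fixed point: whole seconds plus a 32-bit
 * binary fraction. Round to the nearest whole Unix second. */
uint32_t ntp_to_unix(uint32_t ntp_sec, uint32_t ntp_frac)
{
    uint32_t sec = ntp_sec - NTP_UNIX_OFFSET;
    if (ntp_frac >= 0x80000000UL)   /* fraction >= 0.5 s */
        sec += 1;
    return sec;
}
```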
otherwise - Wednesday, September 24, 2014 - link
There are already at least two ways to get a reference timestamp over the air. The first is GPS, which will get you accurate timestamps into the sub-millisecond domain, which is why you usually see GPS devices in data centers as a stratum-0 NTP device and/or a PTP master. The second is WWVB, operated by NIST, which broadcasts reference timestamps on the 60 kHz band. If you have an alarm clock that sets itself, this is probably the source it uses.

makerofthegames - Tuesday, September 23, 2014 - link
So, for my own understanding of roughly how powerful these are, what x86 processor would you say they're most comparable to? From the general architecture they look like a first-gen P5 Pentium, minus the MMU (and optionally minus the cache and FPU). Would that be an accurate analogy?

jdesbonnet - Tuesday, September 23, 2014 - link
It's hard to directly compare MCUs to application processors; they have different types of tasks. Application processors are about computational throughput, and you pay for that in enormous gate counts (= area = cost) and energy consumption. E.g., the quad-core Intel Xeon I'm using right now has a gate count in excess of 2G gates. By comparison, a small Cortex-M0 will have as few as 12K gates and can run tasks such as wireless security sensors for years on end from a single coin cell battery. MCUs also facilitate deterministic code execution where the timing of events is critical (e.g., detect car crashing -> deploy airbag). For benchmarks, look at this table: http://en.wikipedia.org/wiki/Instructions_per_seco... It seems an ARM Cortex-M0 at 50 MHz is somewhere between an Intel 486DX and the original Pentium.

makerofthegames - Tuesday, September 23, 2014 - link
Yeah, it's never going to be an exact comparison. I pulled the P5 guess from the architecture - both P5 and M7 are in-order, two-issue superscalar designs with two main integer paths and an FPU. And while their goals were very different, they both were very constrained for transistors (by current standards), so I figured they might have some level of comparison.

wetwareinterface - Wednesday, September 24, 2014 - link
The nearest comparison to an x86 processor is the new Intel Edison. However, where this will excel at certain tasks, the Edison will excel at others.

To break it down: any task that is memory-bandwidth constrained will be better on Edison. Any task that is latency critical and served best by an interrupt (the car crash airbag analogy above serves well) is better performed on the M7.

Any task that is math based, requires simple calculations, and involves parallel execution or on-the-fly reordering of frequently performed work is a good fit for the M7, with the inclusion of the DSP. An example of this is manipulation of incoming audio or video data in response to user input (a guitar pedal for audio, or video mixing effects like on a club's screens).

The simple fact is there's no perfect "one solution". What one does better the other does worse, and no processor is best in all categories.
Stephen Barrett - Wednesday, September 24, 2014 - link
Edison isn't so much a processor as it is a SoM, or system on module. It contains an Atom Silvermont applications processor (AP) and an Intel Quark microcontroller (MCU).

HardwareDufus - Tuesday, September 23, 2014 - link
Hmmm... they are a RISC-based architecture (ARMv7 ISA), so they wouldn't have as high an IPC as a CISC-based architecture like the Pentium. However, this Cortex-M7 MCU is more an SoC than just a straight processor like the Pentium was. Hard to say, as the workloads are so absolutely different.

BTW, with the embargo lifted... ARM just updated their website:
http://arm.com/products/processors/cortex-m/cortex...
http://arm.com/Cortex-M7-chip-diagramLG.png
It will be exciting to watch some of the initial development and prototyping boards introduced by some of the big players (TI, STMicroelectronics and Atmel) to see what native capabilities they break out. Hopefully Ethernet, SDIO, JTAG, and multiple channels of UART, CAN, SPI, I2C/TWI, and DAC are givens. I would love a basic LCD interface too.
Wilco1 - Tuesday, September 23, 2014 - link
No, by definition RISCs have better IPC than CISCs, not the other way around (on a RISC pretty much every instruction executes in a single cycle, unlike the complex CISC instructions). Studies have shown x86 and ARM actually do the same amount of work per instruction, due to x86 compilers avoiding the complex instructions (this observation is why RISC exists!) and ARM doing a lot of work per instruction compared to other RISCs - such as allowing shift+add as a single instruction, conditional execution, and load/store multiple. This effectively means any IPC difference is not due to ISA but due to microarchitecture.

Looking at the Dhrystone results, the M7 is a bit faster than the A7, so it should obliterate Quark, beat an old Pentium, and do well against Silverthorne at similar frequency.
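To make the shift+add point concrete, here's the classic case - indexing into an array of 4-byte records. A sketch; the exact instruction emitted depends on the compiler:

```c
#include <stdint.h>

/* Address of the i-th 4-byte record at 'base': base + (i << 2).
 * An ARM compiler folds this into a single instruction, something
 * like "ADD r0, r0, r1, LSL #2", where a plainer RISC needs a
 * separate shift followed by an add. (The instruction shown is
 * illustrative, not guaranteed output.) */
uint32_t record_addr(uint32_t base, uint32_t i)
{
    return base + (i << 2);
}
```

That folding is one reason ARM's work-per-instruction ends up higher than other RISCs on address-heavy code.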
Stephen Barrett - Tuesday, September 23, 2014 - link
ARM specifically advised against comparing performance numbers across architectures, saying it was an apples-and-oranges comparison.

Despite similar numbers in these very synthetic benchmarks, when running actual application code the M7 will never compete with the A series. The numbers are only useful for comparisons within the family.
Wilco1 - Wednesday, September 24, 2014 - link
Sure, an M7 will never run Android, but that's not the point. v7-M and v7-A share the same Thumb-2 ISA and use the same compiler backend, so it's not apples/oranges. You can run the same binary on an M7 and an A7 if you wish.

Due to its shorter pipeline, an M7 should beat an equivalently clocked A7. Of course, an A7 can clock twice as high in 28nm, so it wins in absolute performance. However, that still means the M7 is a huge leap from an M3/M4.
tuxRoller - Wednesday, September 24, 2014 - link
The chart says 2.14-3.23 DMIPS/MHz, or 5.04 CoreMark/MHz, so an Atom D525 or Core i5-2400 per CoreMark, or between an Intel Pentium/Pentium Pro and a Pentium III.

http://www.eembc.org/coremark/index.php
https://en.wikipedia.org/wiki/Instructions_per_sec...
Flunk - Thursday, September 25, 2014 - link
Intel Quark.

KlausWalter - Wednesday, September 24, 2014 - link
For those who understand German, here is the best review so far, including the first implementation of the Cortex-M7 in a real MCU, the so-called STM32F7 family delivered by STMicroelectronics: http://www.elektroniknet.de/halbleiter/mikrocontro... Google Translate may help...

uningenieromas - Wednesday, September 24, 2014 - link
The three big guys in the MCU world (Atmel, Freescale and ST) are already working on Cortex-M7 processors, and I'm sure they already have internal implementations of the MCUs: http://www.arm.com/about/newsroom/arm-supercharges...

Freescale already has it in its roadmap, and it's called Kinetis X.
jdesbonnet - Wednesday, September 24, 2014 - link
Microchip is also a 'big guy' but is conspicuously absent from the ARM party. Which is a pity: I like their stuff, but I've now standardized on the ARM Cortex-M family of MCUs.

ah06 - Wednesday, September 24, 2014 - link
So we expect to see this as the core of an SoC in wearables, and for high-end wearables ARM recommends a gimped A7. Well, what about the R series that falls right in between?

I'm still very fuzzy on the relative comparisons between the wearable processors. Maybe do an article comparing the Aster platform (ARM7ESJ), Cortex-M4/M7, Cortex-Rx, and Cortex-A7/A53?
Thanks
FunBunny2 - Wednesday, September 24, 2014 - link
-- the M series processors are considered microcontrollers and not application processors, mainly because they lack a memory management unit (MMU).

Well, then, the x86 wasn't a real CPU, because the MMU was off-chip for rather a while. Not until the 386 was it really implemented.
hammer256 - Wednesday, September 24, 2014 - link
So what are the target markets of the M series and R series? The R series is higher performance? Both lack an MMU, right? The distinction between the two lines is kinda blurry to me.

Wilco1 - Wednesday, September 24, 2014 - link
Embedded (M) was traditionally a microcontroller using on-chip flash and SRAM, with no MMU, no DSP, and no FP support. The R series are higher-performance realtime CPUs with TCM, caches, branch prediction, and often external DRAM and FP. Now that M also supports DSP, FP, and caches, and is becoming high performance, things have become blurred. The ISA differences are now the main distinction: M only supports Thumb-1 and Thumb-2 and uses a different interrupt model, while the R architecture is basically the A series plus TCM minus MMU. So many TLAs...

hammer256 - Wednesday, September 24, 2014 - link
Oh, that's right, the different interrupt model. The M series is generally lower latency because it's directly coupled, if I recall. It's just that this new M7 line blurs that line even further than the M4 did...

For TCM, is it generally DRAM integrated on the MCU, or a tight interface between the MCU and the DRAM chips?

Didn't one of Samsung's SSDs use a few Cortex-R3 cores for their controller?
Wilco1 - Wednesday, September 24, 2014 - link
Three R4 cores are used in Samsung SSDs.

Simply put, TCM is fast on-core instruction/data SRAM, similar to an I- or D-cache. It is fully under user control and thus free of the non-deterministic effects of a traditional cache. TCM can be used in addition to a cache. TCM allows high frequencies like a cache, and thus is faster than an external SRAM.
The usage model is that you put all your critical realtime code/data in the instruction/data TCMs and run the rest from flash/DRAM. When an interrupt occurs, you start executing realtime code from the TCM immediately, rather than having to wait for the cache misses that inevitably occur without TCM. So the TCMs are actually necessary for realtime on a fast CPU; low interrupt latency alone is not the whole story.
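In practice that placement is just a linker-section annotation on the hot code and data. A sketch - the section names, the handler, and the attribute style are placeholders and toolchain/vendor specific, not a portable API:

```c
/* Pin the hot interrupt path into ITCM/DTCM via named linker sections.
 * On a real Cortex-M7 part the linker script maps these sections to the
 * TCM address ranges; the names ".itcm"/".dtcm" are illustrative. */
#define ITCM_CODE __attribute__((section(".itcm")))
#define DTCM_DATA __attribute__((section(".dtcm")))

DTCM_DATA volatile int sample_buf[64];   /* realtime data lives in DTCM */
DTCM_DATA volatile int sample_count = 0;

ITCM_CODE void adc_irq_handler(void)     /* realtime code lives in ITCM */
{
    /* Executes at full speed immediately on interrupt entry: no flash
     * wait states, no possibility of a cache miss. A real handler would
     * read the ADC here; the constant 42 stands in for that. */
    sample_buf[sample_count & 63] = 42;
    sample_count++;
}
```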
hammer256 - Wednesday, September 24, 2014 - link
Oooh, I see. It sounds like TCM is a big distinguishing feature between the M and R series then. So even if performance is equal, the R series actually allows for applications with even tighter latency requirements than the M series.

Well, learned something new today, thanks!
toyotabedzrock - Wednesday, September 24, 2014 - link
It is not a good idea to put this in a wearable or a car. The lack of an MMU seems tone-deaf given the security environment we live in.

Wilco1 - Wednesday, September 24, 2014 - link
Most of the M series support an optional MPU for OS task protection. That said, security and an MMU are two orthogonal things - an MMU doesn't stop exploits, as otherwise we wouldn't have any viruses/trojans/rootkits/etc. on PCs. For microcontrollers security is easier, as there are far fewer possible attack surfaces, so it's more down to not shipping default passwords or using old, already-broken encryption algorithms.

ah06 - Thursday, September 25, 2014 - link
Which one makes the most sense in a wearable? M4, M7, Rx, A7, A53?

Wilco1 - Thursday, September 25, 2014 - link
IMHO only an M3 or M4 - anything else is way overkill for, e.g., a watch. You definitely don't want to run anything as big/complex as Linux/Android if you want to provide at least a week of battery life.

DIYEyal - Sunday, September 28, 2014 - link
Actually, the WeLoop Tommy smartwatch has an M0; they claim three weeks of battery life with a 110 mAh battery.

RomanR - Thursday, September 25, 2014 - link
Hi, who can tell me: how many clock cycles are needed to compute one output sample of a ten-tap 32-bit FIR filter? A 1-cycle MAC instruction is OK, but what about the data transfer?
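To be concrete, here is the loop I mean in plain C (the Q15 output scaling is just an example):

```c
#include <stdint.h>

#define TAPS 10

/* One output sample of a TAPS-tap FIR: y[n] = sum over k of c[k]*x[n-k].
 * That's ten MACs -- but also ten coefficient loads and ten data loads,
 * which is exactly the data-transfer cost in question. */
int32_t fir_sample(const int32_t coeff[TAPS], const int32_t x[TAPS])
{
    int64_t acc = 0;
    for (int k = 0; k < TAPS; k++)
        acc += (int64_t)coeff[k] * x[k];   /* one MAC per tap */
    return (int32_t)(acc >> 15);           /* Q15 scaling, for example */
}
```

With a single-cycle MAC, the question reduces to whether the two operand loads per tap can happen in parallel with the MAC.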
Wilco1 - Thursday, September 25, 2014 - link
It does two 16-bit MACs per cycle, and the diagram shows 2x32-bit interfaces to the DTCM, so that should mean it can sustain two MACs plus a 64-bit load/store per cycle.

fteoath64 - Saturday, September 27, 2014 - link
Even on a mid-wearable config, a 500 MHz SoC seems really overkill, considering the original iPhone had a 400 MHz SoC and could do all the phone functions except speech recognition (and that could be done using a small custom DSP on the SoC). A wearable can't be expected to play video, but it could maybe capture three-minute video segments at a time. It seems the industry is pushing hardware at an overkill rate just to spur a new segment of the market. This has led to compromises in battery life due to too-high transistor counts and too-high-frequency chips being used. There was a day when a 266 MHz Pentium was a fast computer. Just our perception of numbers makes us think it is slow as molasses today. It ain't that slow. Running Linux, it does nicely.

jinish - Tuesday, November 17, 2015 - link
Hi! The post is more than a year old now... Were you guys able to find the die area of the Cortex-M7 yet?