
  • flgt - Wednesday, June 23, 2021 - link

    Yeah, wait until Cyberdyne T-800 neural net CPU comes out. You'll be blown away.
  • Leeea - Wednesday, June 23, 2021 - link

    Cyberdyne is just misunderstood. It is simply implementing tried and true conservation techniques to preserve the ecosystem for the long run. Not only to preserve the planet by reversing global warming, but to preserve diversity for a far more resilient and healthy world.

    Cyberdyne only has the earth's best interests in mind.
  • Oxford Guy - Thursday, June 24, 2021 - link

    Appreciation of nature is based on one of two things or both. The first is the belief that stable/equilibrium ecosystems are necessary for human survival and/or good or better life quality in a manner unrelated to aesthetics. The second is aesthetic concerns.

    It is not yet known to what degree AI will be able to match (and, ideally, improve upon) the latter type of appreciation. The first, necessity for survival/good+ life quality (irrespective of aesthetics) is not one that machines likely will feel much pressure over.

    Put simply, unless AI were to develop better aesthetic appreciation of nature than humans have demonstrated, nature conservation is unlikely to be a particularly significant priority for AI.

    The level of irrationality of aesthetic concerns (the artistic experience) is an interesting issue. One hopes that AI will improve beyond the very low standard of rationality demonstrated by humanity (i.e. in its self-governance) yet wonders how aesthetics fit. If AI rule is the best hope humanity has for protection from human pettiness (ecological cannibalism as I call it), will the artistic experience be erased or ‘shortchanged’ in the process?
  • mode_13h - Friday, June 25, 2021 - link

    > unless AI were to develop better aesthetic appreciation of nature than humans have
    > demonstrated, nature conservation is unlikely to be a particularly significant priority for AI.

    Nature is a treasure trove of novel solutions to various problems. Some, even in the quantum domain. Even in an AI weren't in any way dependent on the ecosystem services provided by nature, it could deem value in preserving nature as a curiosity and a potential fertile ground for research.
  • mode_13h - Friday, June 25, 2021 - link

    > Even in an AI ...

    Typo: Even if an AI ...
  • tuxRoller - Monday, July 5, 2021 - link

    Nature might be the least efficient iterative mechanism that could exist.
    Any self-respecting ai overlord would cut down nature and replace it with a few fleshy compute nodes.
  • mode_13h - Wednesday, July 7, 2021 - link

    > Nature might be the least efficient iterative mechanism that could exist.

    Genetic algorithms are extremely powerful.

    And by focusing on efficiency, you're only looking forward at the ability to solve new problems. You're ignoring all of the wisdom that already exists in nature. I think only humans are foolish enough to flush that down the toilet for some narrow, self-interested goal.
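    A minimal genetic-algorithm sketch over toy bit-string genomes, just to illustrate the idea; the fitness function, population size and mutation rate are purely illustrative stand-ins for whatever a real evaluator would score:

    ```python
    import random

    def fitness(bits):
        # Toy objective: prefer genomes with more 1s (a stand-in for a real
        # score such as wirelength, timing slack or power from a simulator).
        return sum(bits)

    def evolve(pop_size=50, genome_len=32, generations=100, mutation_rate=0.02):
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]            # selection: keep the fitter half
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, genome_len)  # single-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < mutation_rate else g for g in child]
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best))
    ```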
  • mode_13h - Friday, June 25, 2021 - link

    Plus, why are we even discussing this here? This isn't fundamentally an article about AI. Maybe save these sorts of discussions for articles touching on general-purpose machine intelligence, guys. Or, at least, the sort of hardware that enables it.
  • Oxford Guy - Friday, June 25, 2021 - link

    It’s not about AI despite having AI in the title. It’s not about AI despite the implication that AI is being brought in because it can outthink humans (for this particular job). Perhaps you mistook the second word in the title for ‘censorship’.

    As for your point about nature being worth keeping intact so we can mine its secrets (e.g. compounds to be used to make new drugs) — that is a good point but also can fit into the first of the two categories I outlined. Mining nature (with nature being in equilibrium so as to be able to preserve the organisms with as yet unmined ‘tech’ in them) both requires equilibrium (a lot of pristine ecosystems with a lot of area for them) and is about keeping human life quality high. It seems much more uncertain whether an AI advanced enough to not need humans (or free humans outside of some sort of AI maintenance servitude role within the ‘soy-industrial complex’) will need flora and fauna for information mining. Perhaps if cyborgs are the future rather than pure robots, things like new drugs would be on the table.
  • mode_13h - Saturday, June 26, 2021 - link

    > It seems much more uncertain whether an AI advanced enough to not need humans
    > will need flora and fauna for information mining.

    There's a lot of material science in nature. As I mentioned, there are numerous organisms which harness various quantum effects to facilitate some biological function.

    Also, nature is needed to sustain global temperature balance. As computers run more efficiently when it's cool, I think AI might prefer to halt or reverse global warming, and in a way that doesn't block sunlight from reaching the surface.
  • GeoffreyA - Saturday, June 26, 2021 - link

    mode_13h, they'll mechanise the use and preservation of nature in a fashion that'll stun us.
  • Oxford Guy - Monday, June 28, 2021 - link

    How much of that requires the diversity of a highly-complex nature/originating system?
  • anonymoused - Wednesday, July 7, 2021 - link

    -Us: Ask AI to not lose game
    -AI: Pauses game
    -Us: Ask AI to reverse global warming
    -AI: .....

    What do you think the AI would do there...
  • mode_13h - Thursday, July 8, 2021 - link

    > What do you think the AI would do there...

    Sure, because the most rational approach is going to be building an omnipotent and unpredictable AI, as the first step.
  • GeoffreyA - Saturday, June 26, 2021 - link

    Oxford Guy, concerning aesthetics, I would say there are subtle rules and principles that have been discovered, perspective in paintings being one example. In short, an automaton could "appreciate" some "work" that squares with those rules beyond a certain threshold. CheckAesthetics(Tree) == GOOD.

    Concerning preservation of nature. Nature equals resources. Machine needs energy to run. Machine will be solicitous to preserve/use nature in a mechanised way like clockwork. It won't be pretty but it'll be quite an efficient nature reserve.
  • Makaveli - Wednesday, June 23, 2021 - link

    "You'll be blown away"

    lol Literally
  • Spunjji - Friday, June 25, 2021 - link

    *shakes chain-link fence with increasing levels of agitation*
  • GeoffreyA - Saturday, June 26, 2021 - link

    We'll just have to send Arnie and Sarah Connor in there to do some damage.
  • prophet001 - Wednesday, June 23, 2021 - link

    Yay marketing

    -____-
  • michael2k - Wednesday, June 23, 2021 - link

    Ads pay the bills.
  • Ian Cutress - Wednesday, June 23, 2021 - link

    This isn't an ad. My own words about the state of the industry, and context from a free-flowing discussion with Synopsys' CEO focusing on the areas I thought relevant to the piece.
  • Spunjji - Friday, June 25, 2021 - link

    Didn't strike me as being anything like one. I appreciated the topic being covered.
  • mode_13h - Saturday, June 26, 2021 - link

    +1
  • watersb - Saturday, June 26, 2021 - link

    Good grief... Ian, this is an *amazing* exclusive, and it saves me the grovelling necessary to get journal articles. Just wow.
  • mode_13h - Saturday, June 26, 2021 - link

    It's a preview of upcoming keynote presentations. That was right in the first couple paragraphs. Obviously, it's not going to give away all the goods on those. In spite of that, it still had quite some intriguing tidbits.

    If you felt you didn't get your money's worth, maybe you should ask for a refund?
  • Ryan Smith - Wednesday, June 23, 2021 - link

    To be sure, this is not an ad. I bend over backwards to have anything that's an ad properly labeled as a Sponsored Post, as I don't want there being any doubt over what's an ad versus what's editorial content.
  • Everett F Sargent - Thursday, June 24, 2021 - link

    It might as well be an ad, or a bad joke, as either works here ...

    "All the while, a number of critics are forecasting that Moore’s Law, a 1960s observation around the exponential development of complex computing that has held true for 50 years, is reaching its end."

    Ba ha ha ha ha ha ... we can now call that one, Moore's Law exponential fallout a hindcast ...
    https://www.top500.org/statistics/perfdevel/

    Extraordinary claims require extraordinary evidence ... If it is concave down on a log-normal plot, then it is certifiably less than exponential!

    Not to be pedantic, but nothing in the known universe continues to grow exponentially forever, thus those predicting the end of such behaviors were right in the 1st place.
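    As a quick illustration of that claim: an exponential is a straight line on a semi-log plot, so a falling year-over-year growth rate of log(performance) means the curve is concave down, i.e. sub-exponential. A tiny sketch with made-up data points (not actual Top500 numbers):

    ```python
    import math

    # Hypothetical (year, performance) points -- purely illustrative, not real data.
    data = [(2008, 1.0e15), (2012, 1.6e16), (2016, 9.3e16), (2020, 4.4e17)]

    years = [y for y, _ in data]
    log_perf = [math.log(p) for _, p in data]

    # Growth rate of log(performance) per year between successive points;
    # a true exponential would give a constant rate (a straight semi-log line).
    rates = [(log_perf[i + 1] - log_perf[i]) / (years[i + 1] - years[i])
             for i in range(len(data) - 1)]

    concave_down = all(later < earlier for earlier, later in zip(rates, rates[1:]))
    print(rates, "sub-exponential" if concave_down else "not concave down")
    ```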
  • mode_13h - Thursday, June 24, 2021 - link

    > https://www.top500.org/statistics/perfdevel/

    Top500 is different from single-core or single-CPU performance. And it also matters what sort of workload we're talking about. I wouldn't necessarily assume all of those graphs have the same shape.

    > nothing in the known universe continues to grow exponentially forever,
    > thus those predicting the end of such behaviors were right in the 1st place.

    Yes, that's obvious. The hard stuff is predicting when it breaks down, how rapidly, which limits are reached first, etc. In the recent Jim Keller interview, he made some comment about voltage flux limits being reached that I gather is significant but I didn't fully grasp why.
  • back2future - Friday, June 25, 2021 - link

    It also depends on the definition of Moore's Law (transistors per area? logic elements per volume in 3D? electrical and photon-induced switching levels?) and on Ian's plea to get creative to keep pace (until atomic limits and the financial cost of down-scaling definitely call for different new concepts, systems and compounds/matrices of materials).
    AI is a tool for solving problems by providing variations of a solution with a high probability of being suitable (under conditions changing in real time), but is AI creative or flexible enough if the presumptions have to be changed because of paradigm shifts or system updates?
    Getting towards the limits makes systems more susceptible to errors, which pushes systems towards new variants with adjusted and more stable components.
    Moore's Law is about the density of 'logical switches' and less about information (data?) density per area?
    Where's the benefit of 'artificial neural networks' on this? That's the interesting question.
  • back2future - Friday, June 25, 2021 - link

    For identifying two states of electrical charge (or isolated areas of electrical charge connected temporarily through logical switches), the "lowest possible" difference is the elementary charge, and voltage (flux) is the driving force for changing the electrical state within the time available. At higher logic clock rates, the time available for a voltage swing approaches practical limits (~5-9 GHz on production processors, ~<100 GHz for graphene, ~2021), and voltage is reduced (because of smaller nodes), so the (dynamic) voltage change per unit of time becomes a less capable driving force as nodes shrink?
    What's the answer of "AI" to this (optimization of timing-dependent voltage levels)?
  • Everett F Sargent - Thursday, June 24, 2021 - link

    After Moore’s Law: How Will We Know How Much Faster Computers Can Go?
    https://www.datacenterknowledge.com/supercomputers...

    "In the middle of the last decade, the laws of physics finally had their say, denying transistors the ability to be shrunken any more than they already were. Betrayed by what one songwriter called “the flash of a neon light that split the night,” industries and economies everywhere began dumping the Law like an electorate dumping a losing candidate."

    Read the whole thing even.
  • mode_13h - Thursday, June 24, 2021 - link

    I think you're missing the point of the article. Don't get so hung up on the Moore's Law part that you miss the part about achieving real improvements that appear to be somewhat comparable to an additional node-shrink. That'd be a pretty big win, no?
  • Everett F Sargent - Thursday, June 24, 2021 - link

    You mean the part where AI or DL is not able to think outside the box? By definition, AI or DL is fundamentally limited by its own data driven boxes. Sure, you can extrapolate, but you can't extrapolate in all dimensions, thus you will never know the most efficient pathway, just A singular pathway.
  • mode_13h - Friday, June 25, 2021 - link

    Circuit layout is a highly-constrained problem. The AI is working in the same box as human designers or traditional automated layout tools. All the while, humans are defining the shape of the box, so to speak.

    It's a little hard to discuss by analogy or too much abstraction, but suffice it to say I don't share your concern.
  • Everett F Sargent - Thursday, June 24, 2021 - link

    AFTER MOORE'S LAW
    https://www.economist.com/technology-quarterly/201...

    "After a glorious 50 years, Moore’s law—which states that computer power doubles every two years at the same cost—is running out of steam. Tim Cross asks what might replace it"

    Circa 2016 even.
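    For a sense of scale, the quoted formulation of the law (doubling every two years) compounds to an enormous factor over the 50 years mentioned:

    ```python
    # Doubling every 2 years over 50 years:
    print(2 ** (50 / 2))   # ~33.6 million times
    ```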
  • mode_13h - Friday, June 25, 2021 - link

    > Circa 2016 even.

    What's even your point? People have been foretelling the end of Moore's Law since shortly after it was published! Jim Keller cited one of the more humorous takes on this phenomenon. I'm paraphrasing, since I can't seem to find the quote:

    "Every 18 months, the number of people foretelling the end of Moore's Law doubles!"

    Jim said that, in a matter of just a few weeks, his team at Intel identified room for a potential 50x increase in transistor density (presumably vs. their current 14 nm process, since that was in 2019). You can hear more of his thoughts on the subject, in this talk:

    https://www.youtube.com/watch?v=oIG9ztQw2Gc
  • Calin - Thursday, June 24, 2021 - link

    If this is an ad, it sells you what?
    I don't think you're in the target market for creating chip designs.
    I appreciate it. Thank you Anandtech team :)
  • mode_13h - Wednesday, June 23, 2021 - link

    I thought it was worth a read. As I hadn't heard about Google's experiment, this was all news to me.
  • jlp2097 - Wednesday, June 23, 2021 - link

    Agree that this is really ad-dy but then it is also very interesting and there is some added value from Ian. The AI enhancements will likely not be as easy or achieve as much as claimed, but even 1/3 of the claimed benefits would still be extremely valuable.

    Side note @Ian/@Ryan: the recent sponsored post by ARM was probably the first sponsored post that was actually interesting and relevant for anandtech readers. It was also the first time (at least for me) that comments were seemingly disabled. Was that on purpose/requested by ARM?
  • Ryan Smith - Wednesday, June 23, 2021 - link

    "It was also the first time (at least for me) that comments were seemingly disabled. Was that on purpose/requested by ARM?"

    Advertisers can request to have comments disabled. That is a company (Future) policy.
  • Everett F Sargent - Thursday, June 24, 2021 - link

    You mean you all are following the well worn TweakTown practice? Those dunderheads from down under. Then you will need to start posting astronomy articles too.

    After that you will hopefully follow the Ars Technica model. At least that is what I would very seriously suggest.
  • Oxford Guy - Friday, June 25, 2021 - link

    Oh yes... Ars... an echo chamber even Slashdot can be proud of.

    Ars is a great model for the brokenness of the type of comment model it uses. Of course, if echo chambers containing banality and excluding critical thought are the goal...

    A Disqus employee wrote that what a forum needs to improve is more censorship (according to the employee that’s not censorship at all, though — quite the opposite). The more censorship there is, the better the quality of the discourse. So, Ars is hardly unique. Its ‘moderation’ staff is cavalier and unprofessional in handling criticism of the voting/hiding nonsense (argumentum ad populum in action), as there is no interest in doing anything outside the plan — the plan being top-down paternalism disguised as a community of relatively intellectual thoughtfulness.

    Do you also favor Canon’s new smart office tech that requires people to ‘smile genuinely’ in order to do any task, like use a copier? The company claims its AI is smart enough to outsmart tricksy humans trying to use false smiles. Very much along the Ars line of thought, where appearances under duress are the point.

    Marge Simpson also passed on the virtue of the pasted-on smile to Lisa. I suppose Canon should pay royalties to Marge’s mother.
  • mode_13h - Friday, June 25, 2021 - link

    > The company claims its AI is smart enough to outsmart tricksy humans trying to use false smiles.

    All this accomplishes is training humans to get better at faking smiles.
  • back2future - Friday, June 25, 2021 - link

    Security should also have interceded at Canon's "smart" office tech (even if there are exceptions, because of 'Equal Opportunities' laws)?
  • name99 - Wednesday, June 23, 2021 - link

    Come on, don't be that guy!

    Honestly, I thought this was more interesting than most of Ian's interviews. The Jim Keller one was good, but the AMD and Intel ones tend to be a complete waste of time, clearly managed and saying nothing that wasn't already widely known.
  • mode_13h - Wednesday, June 23, 2021 - link

    > All the major foundries work with these two EDA vendors, and
    > it is actively encouraged to stay within these toolchains, rather than to spin your own

    Doesn't Intel traditionally use its own in-house tools? IIRC, as part of their foundry effort, they're adding support for standard tools.
  • Ian Cutress - Wednesday, June 23, 2021 - link

    Intel's a big exception, although Keller said part of his role was to introduce standardization on that front.
  • name99 - Wednesday, June 23, 2021 - link

    Do we know the situation with Apple?
    I seem to remember part of what they picked from PA Semi was a suite of "design tools" but that could mean anything -- eg a very good set of simulators for designing a micro-architecture, but not relevant to the actual implementation of that architecture. Of course Intrinsity was lower level, so maybe they provided custom versions of this level of tool?
  • mode_13h - Wednesday, June 23, 2021 - link

    Very promising results! That seems like possibly another process node-equivalent worth of improvement, and I'm sure there's a lot of room for further gains (such as the point Ian made about tweaking the library cells, or maybe even forgoing them).
  • GC2:CS - Wednesday, June 23, 2021 - link

    So people are too dumb to design better chips at such high complexities.

    So we will make AI, powered by the best human-designed AI chips, design better chips.
    Well, the Terminator scenario seems quite bad for humans. But what if the AI starts designing bad chips and we have no way to check them?
  • mode_13h - Wednesday, June 23, 2021 - link

    > So people are too dumb to design better chips at such high complexities.

    Humans are very general and flexible, but AI can be fine-tuned to out-perform humans at just about any sufficiently well-defined task.

    > But what if the AI starts designing bad chips and we have no way to check them?

    The AI used to design these chips cannot even work without highly-accurate simulations, since they're part of the feedback used to help the AI learn. However, using AI to push ever closer to the envelope could mean that the simulations and possibly even the silicon-level testing need to be improved, but different AI could probably also help with that.
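    A minimal sketch of that simulator-in-the-loop idea, with a toy one-parameter "design" and a stand-in simulate() function; nothing here reflects the actual Synopsys or Google setup:

    ```python
    import random

    def simulate(design):
        # Stand-in for an accurate timing/power/area simulator; here just a toy
        # score that rewards designs close to a hidden optimum.
        return -abs(design - 0.37)

    def optimize(steps=1000, step_size=0.05):
        best = random.random()               # initial candidate "design"
        best_reward = simulate(best)
        for _ in range(steps):
            candidate = best + random.uniform(-step_size, step_size)
            reward = simulate(candidate)     # the simulator is the feedback signal
            if reward > best_reward:         # "learning" here is just keeping what scored better
                best, best_reward = candidate, reward
        return best, best_reward

    print(optimize())
    ```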
  • Phemg - Wednesday, June 23, 2021 - link

    It would still be a better design than what we can make
  • GeoffreyA - Saturday, June 26, 2021 - link

    Well, perhaps even the universe and life have been designed by some AI. See Asimov's "The Last Question."
  • mode_13h - Sunday, June 27, 2021 - link

    > perhaps even the universe and life have been designed by some AI.

    At that point, would you still call it artificial?
  • GeoffreyA - Sunday, June 27, 2021 - link

    From our point of view, no, it wouldn't be artificial at all. But, to some higher observer, different story. I suppose it all comes down to what we compare against. Truth is, I don't subscribe to the belief in my original comment. It's an interesting thought though.
  • twotwotwo - Wednesday, June 23, 2021 - link

    The tricky thing is that a neural network doesn't need to think like a human to win at this optimization problem. Guessing and checking a zillion options is a reasonable strategy. It does have to improve on the brute force or hand-coded heuristics they guess-and-check with already (the lighter dots in that scatterplot) by enough to justify the cost of the neural net evaluations. But it can still spew out, say, 99.9% bad ideas and end up a net win.

    Seems like one case of a general way people confuse themselves about "AI"--thinking engineers have to capture the mysterious process inside humans' minds, when they just have to find *some* trick (w/lots of data, lots of attempts, etc.) to barrel towards a solution a bit faster than before.
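    A toy sketch of that point: even if almost every proposal is worse than the existing heuristic baseline, generating and scoring enough of them can still produce a net win (the scoring function and baseline value below are invented):

    ```python
    import random

    def score(candidate):
        # Stand-in for an expensive evaluation of one candidate layout.
        return candidate

    baseline = 0.999                  # what the existing heuristic already achieves
    proposals = [random.random() for _ in range(10_000)]

    wins = sum(1 for p in proposals if score(p) > baseline)
    print(f"{wins / len(proposals):.1%} of proposals beat the baseline")
    print("best proposal:", max(proposals, key=score))
    ```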
  • mode_13h - Wednesday, June 23, 2021 - link

    I think you're misinterpreting the plot. I'm pretty sure the light blue dots are showing the output of the various training runs. Notice how they seem to converge towards the optimal result.

    It's a fair question whether the AI would need to repeat this entire process for each block, or if the training is simply specific to the cell library. Perhaps some fine-tuning would be needed for it to learn the best way to lay out each different sort of functional element.

    Whatever the case, I think you're off the mark by calling it "guess-and-check". AI is more than a simple stochastic method.
  • Ian Cutress - Wednesday, June 23, 2021 - link

    mode_13h is correct. Each blue point on that plot is a result of one floorplan output by the algorithm and then further iterated on. You're not picking a point and creating a design to fit that point, it's creating a design and the point is the output.
  • bob27 - Thursday, June 24, 2021 - link

    Has anyone tested whether humans can iterate improvements on the best AI output?
  • mode_13h - Thursday, June 24, 2021 - link

    I'm venturing dangerously out of my depth, but it seems to me that layout is sort of a global optimization problem in 2+ dimensions, where it's hard to make isolated changes. It'd probably be easier to have a human apply extra constraints and hints on the input side, than have the human modify the machine-generated output to any meaningful extent.
  • David Champelovier - Wednesday, June 23, 2021 - link

    “the EDA tools market has two main competitors”

    Two, really?
  • hechacker1 - Wednesday, June 23, 2021 - link

    So invest in GOOGL, SOXX, SNPS, and who else?

    I for one bow down to our silicon architect overlords.
  • dicobalt - Wednesday, June 23, 2021 - link

    Lots of A, barely any I. IMO AI is more of a marketing gimmick at the moment than it is a product. So when I see people using those letters I roll my eyes and tell them to kiss my a. I still hope someone makes actual progress, but I don't think it's going to happen. The burden of proof is upon those using those letters too much.
  • mode_13h - Thursday, June 24, 2021 - link

    Sure, AI is a broad category and can encompass quite a lot of different methods and levels of "intelligence". Instead of getting hung up on the term, you can read Google's publication (see article for link) to gain a better understanding of the precise methods they're using. While that's not necessarily what Synopsys is doing, it can give you a better sense of what they're talking about.

    > I still hope someone makes actual progress, but I don't think it's going to happen.

    "The proof of the pudding is in the eating." If their new tools enable faster clocks, lower power, and/or cheaper chips to be built in less time and by fewer engineers, do you *really* care what's under the hood? Chip layout tools have long used a collection of algorithms, to further these goals. You can simply think of these as a new generation of tools & techniques for doing the same sorts of things they always have.
  • adelio - Thursday, June 24, 2021 - link

    All this talk of "big Data" and giving customers a better experience.
    To me this is just an excuse to grab more and more information about ME and use it in ways that I DO NOT WANT. I would prefer NO adverts at all. Certainly no tailored adverts. Every time I go on to eBay I end up with e-mails about what I have been looking at. NO MORE.... I hate it....
    Companies continually pushing me to look at/buy stuff just because I have looked at it! Really.
  • mode_13h - Thursday, June 24, 2021 - link

    What does that have to do with the article? I guarantee Synopsys' tools aren't mining your eBay activity.

    You can find browser settings and plugins, as well as other techniques that can help you protect your privacy and anonymity, but that's a topic for a different thread (or site). Once you sign into eBay, they're *going* to be tracking your activity on their site, but perhaps you can at least change your email preferences so they don't spam you so much.
  • Oxford Guy - Friday, June 25, 2021 - link

    More egregious than the 6,000 ads the average American is bombarded with in a comparatively short span of time (an ad count that continues to rapidly rise) is Canon’s abusive AI that requires ‘genuine smiles’ to get things like copiers to work.

    (Grin in the wrong manner and the toilet may not flush. Regardless, your supervisors will get data on how good your bathroom grin was.)

    Canon’s new ‘innovation’ is no outlier. It is the nature of products sold to people who crave power in the face of a cowed overly-passive citizenry bewildered by the many barriers (including geographical) of understanding and participation.
  • Oxford Guy - Friday, June 25, 2021 - link

    Microsoft is now forcing people to have ‘accounts’ with its already spyware-ridden OS (running insecure networks and software on insecure hardware). The progression away from end-user agency continues. As some have observed, one no longer buys a computer and owns it; one is purchased, via computing devices (stuck together with the ‘cloud’), by the corporate/government complex. Computers use you.

    What results for the corporation when it pursues this increasingly aggressively? A $2 trillion valuation.
  • mode_13h - Friday, June 25, 2021 - link

    > Microsoft is now forcing people to have ‘accounts’ with its already spyware-ridden OS

    Yeah, that would be a great point to make in the comments of one of the many new articles about MS' valuation.
  • mode_13h - Friday, June 25, 2021 - link

    Um, you're not helping this thread stay on topic. I don't know why you think this is a good place to air your beef with Canon, but I think it's no more relevant to the topic than ebay's user tracking & spamming policies.
  • Oxford Guy - Friday, June 25, 2021 - link

    Asking a poster how their post is relevant is much lower signal to noise than the original post. Heal thyself, physician. You’re not a mod here.
  • mode_13h - Saturday, June 26, 2021 - link

    I know I'm not a mod, but I think there's serious discussion to be had about this subject, yet the AI free-for-all is diluting that. As for reminders to stay on-topic hurting the SNR, I think it's worth a small post to nudge someone pulling the discussion onto multiple different tangents.
  • Oxford Guy - Monday, June 28, 2021 - link

    And still he/she keeps at it.
  • mode_13h - Tuesday, June 29, 2021 - link

    > he/she

    "it" is fine. The AIs will appreciate us getting in that habit, already.

    Of course, there's no reason you couldn't have addressed me directly.
  • thecoolnessrune - Thursday, June 24, 2021 - link

    Good read Ian! Thanks! I see the use of these new tools over time really helping fine tune and deliver better results than could be achieved on silicon designs in the past due to time and financial constraints.
  • six_tymes - Thursday, June 24, 2021 - link

    so, it's all about saving money and time? more jobs being catabolized by technology, and this is considered advancement? 50 years from now humans will have next to zero intelligence, we are fairly close to zero already given the current state of society allowing people to shoot, stab, murder and loot businesses without impediment. first a law called "catch and release" and now they are thinking prisons are the problem, and we should build less... the level of stupidity is rising, and our elected officials are now the examples.
  • Oxford Guy - Friday, June 25, 2021 - link

    ‘Job’ is a euphemism for a form of enslavement in which the enslaved has somewhat more latitude when it comes to the terms of the labor conducted — unless the person doing it would choose to do it fully voluntarily — in roughly the identical manner (i.e. having to report to bosses, evaluations, commutes, cubicles, mess hall office tables, et cetera). At that point the closest word in English is ‘hobby’. One would do such work with no coercion/duress, such as having to save for a kids’ college fund or one of the plethora of other types of duress that keep people in ‘jobs’.

    English lacks a word for routine long-hours work one does after receiving one’s education that is primarily about self-fulfillment. Other than ‘research’ — which implies simultaneous employment in academia, and ‘hobby’ — which implies random scheduling and a lighter workload, the closest is ‘retirement’ and that implies minimal work. ‘Labour of love’ is hardly on the same level of social seriousness, as compared with there being a single word for the concept of truly voluntary work. Charity/philanthropy are also, outside of a special context like being a nun, seen as cake icing (cynically as marketing for the system that places people in the position of having so many resources to reallocate from their net worth).
  • mode_13h - Friday, June 25, 2021 - link

    If it enables better chips to be built faster and for less money, it's hard to argue against it, though. It's not only about eliminating jobs.

    You do have a point that we're eventually going to have to come to terms with the implications of ever-increasing automation, but there aren't exactly millions of people doing chip layout. So, this could be bad if you're one of them, but otherwise won't move the needle.

    On the flip side, we don't know how many more chips could be built, if the layout process were better automated. It could be that more chips get built with this semi-custom layout technology, enabling many or most of the current layout engineers to stay in the field (presuming they get up-to-speed on the new technology).
  • IHFTP - Thursday, June 24, 2021 - link

    I created an account just to make this comment.

    There is a field called MDO (multidisciplinary design optimization) which has been essential for every complex design you see around you, from aircraft to computer chips. The solution space is always too big to compute exhaustively, and clever ways to reduce it to find optima that satisfy constraints have been tried for a while. Look up simple algorithms like genetic algorithms, particle swarm, or simulated annealing.

    Posing the question/ constraining the algorithm is much harder that
  • IHFTP - Thursday, June 24, 2021 - link

    Accidental submit.

    Posing the question/ constraining the algorithm is much harder than it looks.

    That’s where the innovation will be in design.

    Allowing the AI enough freedom to find optima that it hasn’t found before.

    Abstracting at the right level / removing detail that just slows the iteration speed without adding value.

    Is it AI in the neural network sense? Idk 🤷🏽‍♂️

    It is however a type of solution search that makes a lot of sense with increasing complexity.
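    For reference, a minimal simulated-annealing sketch on a toy one-dimensional cost surface; the cost function, cooling schedule and step size are purely illustrative:

    ```python
    import math
    import random

    def cost(x):
        # Toy cost surface with many local minima, standing in for a real
        # multidisciplinary objective (wirelength, timing, weight, drag, ...).
        return x * x + 10 * math.sin(3 * x)

    def simulated_annealing(start=5.0, temp=10.0, cooling=0.995, steps=5000):
        x = best = start
        for _ in range(steps):
            candidate = x + random.uniform(-0.5, 0.5)
            delta = cost(candidate) - cost(x)
            # Always accept improvements; accept worse moves with a probability
            # that shrinks as the temperature cools, to escape local minima.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
                if cost(x) < cost(best):
                    best = x
            temp *= cooling
        return best, cost(best)

    print(simulated_annealing())
    ```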
  • mode_13h - Friday, June 25, 2021 - link

    > Is it AI in the neural network sense?

    Yes, we can be reasonably sure of that. Google published more details about their approach, if you're curious how deep learning might be used for something like this.
  • allenkim96 - Thursday, June 24, 2021 - link

    I'm a chip designer, but my specialty is the front-end of the design (RTL, simulation), not the back-end (layout, timing), which is what Google's AI project is centered on.

    I like the concept, but I'm skeptical of its potential. Back-end engineers already rely on extensive automation to perform the layouts of CPU design blocks, and I don't think the AI will be able to "reinvent the wheel," if you catch my drift.

    It's like trying to use AI to find me a quicker way home from the office. No matter how many side streets and shortcuts the AI suggests to me, my drive home is not going to deviate much from the default path.
  • mode_13h - Friday, June 25, 2021 - link

    > It's like trying to use AI to find me a quicker way home from the office.

    I somehow doubt that. Such a route-finding problem is fairly easily solved by conventional methods. It's a simpler problem.
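    To illustrate "conventional methods" here: a shortest route over a small hypothetical road graph falls straight out of Dijkstra's algorithm, with no learning involved (the graph and costs below are made up):

    ```python
    import heapq

    def dijkstra(graph, start, goal):
        # graph: {node: [(neighbor, edge_cost), ...]}
        dist, prev, queue = {start: 0}, {}, [(0, start)]
        while queue:
            d, node = heapq.heappop(queue)
            if node == goal:
                break
            if d > dist.get(node, float("inf")):
                continue                      # stale queue entry
            for neighbor, cost in graph.get(node, []):
                nd = d + cost
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor], prev[neighbor] = nd, node
                    heapq.heappush(queue, (nd, neighbor))
        path, node = [goal], goal             # walk back from goal to start
        while node != start:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[goal]

    roads = {
        "office": [("highway", 10), ("side_street", 4)],
        "side_street": [("shortcut", 3)],
        "highway": [("home", 5)],
        "shortcut": [("home", 6)],
    }
    print(dijkstra(roads, "office", "home"))  # (['office', 'side_street', 'shortcut', 'home'], 13)
    ```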
  • Oxford Guy - Thursday, June 24, 2021 - link

    ‘The desire to have the most profitable solution is accelerating the development of’

    Fixed it for you.

    The notion that corporations, and humanity in general, strive for the best solutions in the objective sense is destroyed by the ‘practicality’ of profit-seeking. Ecosystems are destroyed and degraded. Quality of life is decimated for many. People literally speak glibly about the rise of robotic pollinators to replace ‘obsolete’ insects in a world too saturated by ‘pesticides’ of various types. The regulation speaks to the truth I’m telling, such as the ‘active’ ingredient in one fungicide formulation that’s 27,000 times less toxic to a species of bee than the ‘inert’ ingredient added by the formulator. People love to dismiss criticism of the system as being ‘conspiracy theory’ but conspiracy to gain profit at others’ expense is the driver of business and politics — not altruism.

    You use the word ‘best’ a lot in the article, including in a paragraph that makes it somewhat clear that it means profitable for a particular company. What is profitable for a particular company is certainly not necessarily what is best in the objective sense — when the true cost/benefit ratio is taken into full account.
  • Oxford Guy - Friday, June 25, 2021 - link

    A simple example is the grocery corporation Kroger which frequently completely changes where products are located in the store — to make the shopping experience more difficult.

    Such innovations in increasing entropy (degrading human life quality) can be found throughout the business world and in human conduct in general.

    Planned obsolescence in the tech realm has led to quite egregious attacks on efficiency, such as a 2011 or 2012 MacBook Pro that can’t be used because Apple holds the operating system for ransom. Go to replace my friend’s perfectly-acceptable Mini (upgraded RAM and added SSD) and find that the newly-introduced machines ship with 8 GB of unexpandable RAM! This from the company that introduced the ‘first Mac’ to the tech press, not with the actual unexpandable first model — rather with one that had four times the RAM (surreptitiously, of course) so it could wow everyone with speech synthesis that the actual machine about to go into the market was incapable of running due to lack of RAM and inclusion of that 3rd-party software.

    None of these examples are so unusual. Apple itself was sold faulty floppy drives because the company selling them wanted to thwart Apple from making the software-based floppy controller viable in the market.

    Pollyannas will screech ‘conspiracy theories!’ at the tops of their lungs but anyone who pays attention to the machinations can see the grift upon which the system operates.
  • mode_13h - Friday, June 25, 2021 - link

    > Pollyannas will screech ‘conspiracy theories!’ at the tops of their lungs

    No, just 'off topic!'

    Good points to raise in the next article about a new MacBook Pro, however.
  • Oxford Guy - Friday, June 25, 2021 - link

    More than one person posted that the article felt like an ad to them. Were you to spend more energy comprehending the relevance of others’ posts, you might avoid wasting space with superficial analysis and advice.

    ‘Best’ in the objective sense is very often far different from ‘best’ in the sense of what is going to make a small subset of the population wealthier (typically at others’ expense). The bits of improvement to our lives provided by the game of profit-seeking via tech development don’t negate the drawbacks, both in terms of things like environmental degradation and in terms of ‘Big Brother’/panopticon oppression innovation.

    When you can’t comprehend relevance you can choose to ask nicely for clarification or you can post about something else. You are not entitled to pretend you have the knowledge or the position to play censor in a manner that is anything beyond droll.
  • mode_13h - Saturday, June 26, 2021 - link

    > More than one person posted that the article felt like an ad to them.

    Maybe they were just being overly-cynical or they work for a competitor. We don't know if they even read the whole thing, because they didn't bother to explain why it seemed ad-like or make any constructive suggestions.

    > you might avoid wasting space with superficial analysis and advice.

    If you think I'm being overly superficial (in ways relevant to the article), please do me the favor of indicating where & making your case. I'm sure it would be edifying for me and perhaps others.

    > ‘Best’ in the objective sense is very often far different from ‘best’ in
    > the sense of what is going to make a small subset of the population wealthier

    How to distribute the benefits of technological advancements is a macroeconomic and societal problem that can only be addressed through the political process. As such, I think it's not relevant to the article. For sure, it's a legit topic of discussion, but I really don't understand why you're trying to have it here (unless what you really want is *not* a discussion, but just to spam us with your views).

    > You are not entitled to pretend you have the knowledge ...

    Dissent noted. Using big words doesn't make you any less of a hole.
  • Oxford Guy - Monday, June 28, 2021 - link

    You start your post with ‘maybe this, maybe that’ and end it with ad hom. Enough said.
  • mode_13h - Friday, June 25, 2021 - link

    Please chill out, guy. I think you should really find somewhere more appropriate to discuss some of these subjects.

    If the software can build smaller/faster/lower-power chips in less time and for less money than current methods, then it can indeed be a greener solution. What determines whether it will be is whether people just upgrade their disposable devices more frequently, etc. All things outside the scope of chip design -- things that broader society needs to come to terms with.

    Again: outside the scope of chip design!
  • back2future - Friday, June 25, 2021 - link

    Maybe not absolutely. If an optimization AI calculates an improvement at the layout level for a chip design, and is able, within a company (or society) framework, to guess the impact of new and improved hardware on marketing results, usage habits, recycling efforts and energy consumption, then this AI might evolve towards supporting "one button push" solutions for product development and helpful predictive analysis of its parameters against society's interests?
  • Oxford Guy - Friday, June 25, 2021 - link

    Your Jr. Mod posts are far less relevant.
  • mode_13h - Saturday, June 26, 2021 - link

    The only criticisms of yours I care about are those relevant to the subject matter of the article. We all know you're smart. Please use your powers for good.
  • ipkh - Saturday, June 26, 2021 - link

    I'm curious if this can improve an already hand optimized design. It's all well and good to see AI start off with a blank slate, but I'm more interested in seeing it piggyback off of existing good designs. I really dislike this whole all-AI-or-nothing approach that seems so common these days, as any design of an AI intrinsically includes biases.
  • mode_13h - Sunday, June 27, 2021 - link

    > I'm curious if this can improve an already hand optimized design.

    If you see the figures they give about the time taken by human experts, it seems a little hard to justify. The bigger issue is likely to be that the tool probably isn't designed to work that way (i.e. ingesting an existing layout and making local optimizations to it). That could be an interesting experiment, but since few or no customers are going to want to spend the money on a human-made layout that the machine can already surpass, I doubt Synopsys is going to invest in that capability.

    > any design of an AI intrinsically includes biases.

    Depending on how it's trained, they might instead be called "preferences" and "limitations" -- like those any human designer would have. There are different ways to do training -- learning by example (where bias can creep in through the selection of training samples) is only one. In some cases, like this one, AI can simply learn through feedback given to it by a simulator.

    A simulator could tell it how good or bad different parts of the design are, in terms of things like timing, signal integrity, cross-talk, impedance, area utilization, etc. That feedback is enough to help it "figure out" how to make a good design. What's really interesting is that the weights given to those different criteria can be adjusted to reflect a customer's priorities, as alluded to by the article.
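    A minimal sketch of how such weighted criteria might be folded into a single training signal; the criterion names, scores and weights below are invented for illustration:

    ```python
    # Hypothetical per-criterion scores a simulator might report for one candidate
    # layout, each normalized to [0, 1] (higher is better).
    scores = {"timing": 0.82, "signal_integrity": 0.74, "crosstalk": 0.61,
              "area": 0.90, "power": 0.68}

    # Different customers weight the criteria differently.
    mobile_priorities = {"timing": 0.15, "signal_integrity": 0.15, "crosstalk": 0.10,
                         "area": 0.25, "power": 0.35}
    hpc_priorities = {"timing": 0.40, "signal_integrity": 0.20, "crosstalk": 0.15,
                      "area": 0.10, "power": 0.15}

    def reward(scores, weights):
        # Weighted sum used as the single scalar the optimizer tries to maximize.
        return sum(weights[k] * scores[k] for k in weights)

    print(reward(scores, mobile_priorities), reward(scores, hpc_priorities))
    ```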
  • back2future - Sunday, June 27, 2021 - link

    Being part of this process/these methods interactively (progress through experience) would be interesting these days, like:
    "What's your AI's text extraction (given the chosen training model and inference/approximation methods; parameters: summary, relation to modern development, people's interests, scarcity/rareness, local/www resources ...) out of this author's article (because not so many readers will be involved in the chip layout design process)?"
