Technology Show 2020: Development of Computer Engineering

One day it is a dream; the next it is reality. It flies your planes. It recognizes your voice and your face. It tells you what to read, watch, and listen to. Technology is developing so fast that it is hard to remember it was not always this way, and it will not stay this way either: technology is expected to cure our diseases, correct our thinking, and fundamentally change the way the world works.

With roots in both electrical engineering and computer science, computer engineering has been at the forefront of every major technological advancement of the past 100 years. From the Internet to personal computers to smartphones, it has revolutionized our way of life. But the stakes are just as remarkable. Done right, computer engineering put Americans on the moon with less computing power than today's dishwashers. Done wrong, it sent two highly sophisticated commercial airliners into the ground shortly after takeoff.

Today, the rewards of getting it right include the rapid development of radically effective new drugs and countless other beneficial advances. At the same time, errors in computer engineering can be so serious that they pose a threat to human survival. This is not science fiction; it is the future of computer engineering. And it is coming faster than you think.



The New Moneyball: Big Data and Algorithms for Social Good

In the early 2000s, the Oakland A's, a low-budget Major League Baseball team, managed to field a roster that could compete with far richer rivals. They did it with data. The approach, now known as sabermetrics, grounded the A's methodology in data points that were considered obscure at the time. Rather than focusing on traditionally tracked metrics (a player's pitching speed or stolen-base percentage, for example), the team's managers dug into newly recorded statistics to discover what they believed really mattered. The result was not only a better team in Oakland but a revolution in how baseball franchises are run.

Big data gives computer engineers more information than ever to inform their decisions. With nearly unlimited data points, the challenge is knowing what to ask. And this does not just apply to sports and business; it can benefit society as a whole. Innovators are using algorithmic sorting and sabermetrics-style analysis to address inequality, improve hiring practices, and stem the spread of misinformation.

Rediet Abebe, a Ph.D. candidate in computer science at Cornell University, has pioneered a new approach to algorithmic ranking designed to close gaps in the resources available to vulnerable groups. As an intern at Microsoft, she developed an AI project that aimed to identify unmet health needs in Africa by scanning people's search queries. Abebe designed her own algorithm to analyze those queries, surfacing searches on topics such as HIV stigma, HIV discrimination, and natural HIV treatments. In doing so, she revealed a segment of the population that needed help but was not getting it. Her project has since expanded to all 54 African countries, using web data alone to identify the people most likely to need support. Now she is bringing the project to the United States, working with a National Institutes of Health advisory committee to address health disparities.

In another example, Vy Doan and Eric Le, two computer science and engineering undergraduates at the University of Nebraska, turned the power of algorithms against misinformation. Recognizing that humans are prone to confirmation bias, Doan and Le developed a machine learning algorithm to identify suspicious news on its own. By feeding the system years of Twitter posts, they were able to pinpoint the data points that help flag misinformation: location, account age, and posting frequency. With the detection system in place, they began programming a browser extension to warn users about unreliable sources of information, along the lines of the sketch below.
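Doan and Le's actual pipeline is not public in detail, but a minimal sketch of the idea, training a classifier on the three account-level signals named above, might look like the following. The feature values, labels, threshold choices, and the random-forest model are illustrative assumptions, not their implementation.

```python
# Minimal illustrative sketch (not Doan and Le's actual code): flag accounts as
# likely misinformation sources using location, account age, and posting frequency.
# All feature values, labels, and the model choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training rows: [account_age_days, posts_per_day, location_risk_score]
X = np.array([
    [3000, 2.1,   0.1],  # old account, modest activity
    [15,   180.0, 0.9],  # brand-new account, extreme posting rate
    [1200, 5.5,   0.2],
    [7,    250.0, 0.8],
    [2500, 1.0,   0.3],
    [30,   90.0,  0.7],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = flagged as a likely misinformation source

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

def flag_account(account_age_days, posts_per_day, location_risk_score):
    """What a browser extension might call before warning the user about a source."""
    features = [[account_age_days, posts_per_day, location_risk_score]]
    return bool(model.predict(features)[0])

print(flag_account(10, 200.0, 0.85))   # True for this toy model
print(flag_account(2800, 1.5, 0.15))   # False for this toy model
```

In a real system the labels would come from fact-checked datasets and the features would be far richer, but the core loop is the same: learn from labeled posts, then score new sources in the browser.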

Big data is no longer new. As the technology matures, the question facing computer engineers is not how much data they have, but which data deserves their attention and what they do with it.

Brain-Computer Interfaces: Connecting Mind and Machine

As technology continues to advance, computer engineers are exploring more and more ways to physically connect humans to their machines. Brain-computer interfaces (BCIs) pursue that goal in a startlingly literal sense. By wiring the brain to today's hardware, BCIs have the potential to push human evolution to its next stage.

More than a quarter of Americans are estimated to suffer from some form of brain disorder. That sounds like the setup to a bad joke, but it is not: these conditions range from post-traumatic stress to limited mobility to memory disorders such as Alzheimer's. In 2013, President Obama announced the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which aims to study both how the brain works and how technology can best interact with it.

So far, the results have been promising. Preclinical studies have shown how brain cells work together to process mood. Noninvasive ultrasound technology can release drugs into specific areas of the brain. Adaptive electrical-stimulation devices are being used to treat movement disorders. Together, these point to a shift in how we treat brain problems: not just with diffuse chemicals, but with targeted electrical connections.

The public sector is not the only one pushing the field forward: Elon Musk has his own BCI company, Neuralink, a roughly 100-employee startup developing systems for transferring data between people and computers. Founded in 2017, it only recently announced some of its progress: recording the brain activity of rats through thousands of electrodes implanted along neurons and synapses. Musk has also hinted at successfully placing a BCI in primates, allowing the animals to control computers with their minds. The next step is a clinical human trial in 2020, which will seek FDA approval. The aim of these trials is to implant those electrodes in human patients and let them control devices with their thoughts.

Neuromorphic Computing: Building a Brain

One way to avoid the regulatory hurdles facing BCIs is to skip human subjects entirely and simply build a whole new brain. Neuromorphic computing aims to design machines that mimic the capabilities of the human brain in both hardware and software, and the technology is poised to attract widespread attention in 2020.

After inconspicuous beginnings in the 1980s, neuromorphic computing took a big step forward in 2017, when Intel introduced the Loihi neuromorphic processor, a self-learning chip that imitates brain function by adapting to feedback from its environment. The Loihi chip is extremely energy efficient, draws inferences from the data it records, and grows smarter over time. It is also remarkably capable: neuromorphic hardware excels in areas previously thought to be the province of humans, such as kinesthetics (prosthetics) and visual recognition (pattern classification).
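Loihi is built around spiking neural networks, in which neurons accumulate input over time and fire discrete spikes rather than computing continuous activations. As a rough illustration of that principle (a generic textbook model, not Intel's hardware or API, with arbitrary constants), here is a minimal leaky integrate-and-fire neuron simulation:

```python
# Toy leaky integrate-and-fire neuron: the basic building block that spiking
# (neuromorphic) hardware implements in silicon. Constants are illustrative.
import numpy as np

dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = 0.0      # resting potential
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # potential after a spike

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.12, size=200)  # random input drive

v = v_rest
spike_times = []
for t, i_in in enumerate(input_current):
    # Membrane potential leaks back toward rest while integrating the input.
    v += dt * (-(v - v_rest) / tau + i_in)
    if v >= v_thresh:          # threshold crossed: emit a spike, then reset
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes at steps: {spike_times}")
```

Because such neurons only "compute" when a spike arrives, hardware built from them can sit largely idle between events, which is one reason chips like Loihi are so power efficient.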

In 2019, Intel combined 64 Loihi chips into a larger neuromorphic system called Pohoiki Beach, taking the hardware from the equivalent of 130,000 neurons to roughly 8,000,000 (64 chips at about 130,000 neurons each). In more familiar terms: a single Loihi chip has about half the neural capacity of a fruit fly, while the Pohoiki Beach system approaches that of a zebrafish. The most impressive part is not where the technology stands, but where it is headed. The Loihi chip consumes about 100 times less power than a graphics processing unit (GPU) and 5 times less than dedicated IoT inference hardware, which means Intel can scale to roughly 50 times its current capacity and still outperform competing products.

Next year, Intel plans to launch an even larger neuromorphic system, nicknamed Pohoiki Springs, and competitors such as Samsung and IBM have begun developing projects of their own. It will be a long time before these neuromorphic pseudo-brains match or exceed the size and capability of the brains of the engineers who created them. In the meantime, expect more efficient and more accessible computing power.

Quantum Computing: The Last Frontier

Quantum computing exists, and it doesn't. It is both at once. That apparent paradox is the driving force behind one of the most exciting possibilities in computer engineering.

Where conventional computing is built from bits encoding zeros and ones, quantum computing replaces those bits with qubits, which exist in a superposition: they can act as zero and one at the same time. If quantum computing can be scaled up, it could quickly solve problems that would take traditional computers years or even centuries. What would follow is a paradigm shift in finance, medicine, and IT. The applications are virtually limitless, and for starters, the results may look like magic.
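For a concrete picture of superposition, here is a small single-qubit state-vector simulation in plain NumPy (an illustrative sketch, not any vendor's quantum SDK): a Hadamard gate puts the qubit into an equal superposition of 0 and 1, and measurement collapses it to one outcome at random.

```python
# Minimal single-qubit state-vector simulation (illustrative only).
import numpy as np

ket0 = np.array([1.0, 0.0])                            # |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(state) ** 2       # Born rule: measurement probabilities
print("amplitudes:", state)      # [0.707..., 0.707...]
print("P(0), P(1):", probs)      # [0.5, 0.5]

# Simulate 1,000 measurements: roughly half zeros and half ones.
rng = np.random.default_rng(42)
samples = rng.choice([0, 1], size=1000, p=probs)
print("measured 0s:", np.sum(samples == 0), "measured 1s:", np.sum(samples == 1))
```

The power comes from scale: each added qubit doubles the size of the state vector being manipulated, which is why classical simulation quickly becomes intractable.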

In March 2018, Google's Quantum AI Lab introduced Bristlecone, its 72-qubit processor. This was a crucial step toward quantum supremacy: the moment when quantum computers begin to outperform traditional supercomputers. But this being the quantum world, nothing is that linear, not even processing power. Quantum computers are prone to errors, and achieving quantum supremacy requires not just raw power but lower error rates. Since the concept was first proposed, quantum supremacy has remained elusive, and some have doubted whether it is even theoretically possible. But in 2019, Google showed it was closer than anyone thought.

With each new improvement to Google's quantum chips, computing power has grown at a rate unlike anything else in nature. While traditional computing power grows exponentially (per Moore's Law), Google's quantum computing power appears to be growing doubly exponentially. If the trend holds, practical quantum computing could arrive as early as next year. Use cases are already being sketched out, from better medicines and better batteries to new forms of AI and new materials.
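The "doubly exponential" claim can be stated compactly; a sketch of the comparison, with t standing for hardware generation and the proportionality constants left illustrative:

```latex
% Claimed growth rates, t = hardware generation (constants illustrative).
\begin{align*}
  P_{\text{classical}}(t) &\propto 2^{t}     && \text{(Moore's law: exponential)} \\
  P_{\text{quantum}}(t)   &\propto 2^{2^{t}} && \text{(doubly exponential)}
\end{align*}
% Example: at t = 5, 2^{5} = 32, while 2^{2^{5}} = 2^{32} \approx 4.3 \times 10^{9}.
```

The intuition behind the second line is that each generation adds qubits roughly exponentially, and the computational state space itself grows exponentially in the number of qubits.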

Whether Google's quantum technology can keep growing this quickly and scale effectively remains an open question. But no one in the industry is leaving it to chance. Researchers are already exploring how to redesign key digital infrastructure (such as encryption) for a post-quantum world, and heavyweights like IBM and Intel are racing ahead with quantum devices of their own. They may get there in 2020, or they may not. Or, in true quantum fashion, they may arrive both at the same time and at different times. Either way, it will be a fascinating year for computer engineering.

Internet of Things: Connecting Everything

In short, the Internet of Things (IoT) lets technical devices communicate with one another. Some applications are innocuous, such as a thermostat that adjusts itself or a fridge that notices you are out of milk and orders more from Amazon. But the IoT also allows companies to optimize supply chains, turns inanimate objects into sources of valuable data, and lets cars drive themselves (not to mention ships sail and planes fly themselves). Unlocking the full potential of IoT devices has been a holy grail for computer engineers for years, and the biggest obstacles to the IoT's growth (less-than-real-time data access, limited bandwidth, and outdated operating systems) are set to be cleared in 2020.

IoT devices generate enormous amounts of data that must be processed in data centers, and routing that traffic through cloud servers slows responses below the real-time speeds that the more ambitious IoT applications demand. Edge computing solves this problem by moving computation and data storage closer to where they are needed, increasing processing speed and conserving bandwidth. As edge nodes proliferate on cell towers, edge computing will provide the real-time communication that autonomous vehicles, home automation systems, and smart cities need.
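As a toy illustration of why pushing work to the edge saves bandwidth (a hypothetical sketch, not any particular edge platform or API): an edge node can aggregate raw sensor readings locally and ship only a compact summary, plus urgent alerts, to the cloud.

```python
# Toy edge-aggregation sketch: summarize raw sensor readings at the edge node
# and send only the summary upstream. Names and thresholds are hypothetical.
import json
import statistics

def summarize_at_edge(readings, alert_threshold=80.0):
    """Reduce a window of raw readings to a small summary payload."""
    return {
        "count": len(readings),
        "mean": round(statistics.fmean(readings), 2),
        "max": max(readings),
        "alert": max(readings) > alert_threshold,  # only urgent events need real-time handling
    }

# One second of raw data from a single sensor sampling at 100 Hz.
raw_window = [72.0 + 0.1 * i for i in range(100)]

summary = summarize_at_edge(raw_window)
payload = json.dumps(summary)

raw_bytes = len(json.dumps(raw_window))
print(f"raw window: {raw_bytes} bytes -> summary: {len(payload)} bytes")
print(payload)
```

Multiply that reduction across thousands of sensors per cell tower and the case for edge nodes, rather than round trips to a distant data center, becomes clear.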

Bandwidth has long been the bane of IoT developers. The current capabilities of Wi-Fi and 4G simply cannot handle the real-time communication load required to link sensors together in a meaningful way. With the rise of 5G networks, however, a key turning point in the development of the IoT may be at hand. 5G is already live in a handful of cities (Denver, Chicago, Minneapolis), with a broader nationwide rollout expected in 2020.

Operating systems like Windows and iOS were developed before the IoT became a focus, so newer IoT applications can feel like square pegs forced into round holes. That is why, in April 2019, Microsoft acquired Express Logic, maker of a real-time operating system (RTOS) for IoT and edge devices powered by microcontroller units (MCUs). At the time of the acquisition, the RTOS had been deployed on more than 6 billion devices.

And that is just the beginning. Heavyweights in other industries are looking to buy or build IoT operating systems of their own. Research firm Gartner predicts that there will be more than 20 billion connected devices by 2020, with more than 9 billion MCUs deployed each year. In less than a year, the total market for industrial IoT equipment is expected to exceed $120 billion. That means more and more people will be talking about the Internet of Things, and the Internet of Things itself will be doing more and more of the talking.
