
The IEEE Circuits and Systems Society is the leading organization that promotes the advancement of the theory, analysis, computer-aided design and practical implementation of circuits, and the application of circuit theoretic techniques to systems and signal processing. The Society brings engineers, researchers, scientists and others involved in circuits and systems applications access to the industry’s most essential technical information, networking opportunities, career development tools, and many other exclusive benefits. 


The IEEE International Symposium on Circuits and Systems (ISCAS) is the flagship conference of the IEEE Circuits and Systems (CAS) Society and the world's premier forum for researchers in the active fields of theory, design and implementation of circuits and systems. This is accomplished through technical conference sessions, poster sessions, live demonstration sessions, and publication of conference papers. ISCAS 2024 is inspired by the theme "Circuits and Systems for Sustainable Development", which is perfectly aligned with the host city's goal.

2024 IEEE CAS Seasonal School on Circuits and Systems Wearable Technology

  • 2024 IEEE 6th International Conference on Artificial Intelligence Circuits and Systems
  • 2024 IEEE International Symposium on Circuits and Systems
  • 2023 International VLSI Symposium on Technology, Systems and Applications

The 2023 International VLSI Symposium on Technology, Systems, and Applications will be held in the Ambassador Hotel, Hsinchu, Taiwan, April 17-20, 2023. Established in 1983, the Symposium has been the premier event on VLSI in Taiwan and a leading technology symposium in the world for nearly 40 years. To address the needs of the ever-changing semiconductor industry and the tight coupling between technology and design, VLSI-TSA and VLSI-DAT will be merged into one Symposium in 2023. Organized by the Industrial Technology Research Institute (ITRI), the International VLSI Symposium on Technology, Systems, and Applications is proud to provide an annual platform for technical exchanges by experts from all over the world on advancements in semiconductor research, development, and manufacturing.

Join us for the Symposium's technical programs and experience historic Hsinchu City, where the technical and cultural atmospheres are harmoniously connected.

Hsinchu City, the well-known Taiwan Silicon Valley, is built on clusters of world-class high-tech IC manufacturing and design. It is a one-hour drive from Taipei and 40 minutes from the airport. Industry attendees may take this opportunity to visit Science Park business units, and academic attendees may visit major universities and institutes of Taiwan, either in Hsinchu or farther south, within an hour by high-speed train.

Call for Papers

Before submitting your abstract/paper, please review the information on IEEE Intellectual Property Rights.

For an accepted paper to be published in the proceedings, one of its co-authors MUST register and attend the symposium to present the paper. Note that each accepted paper requires a distinct registration; that is, two registrations are required to present two papers, even if the presenter is the same.

Accepted papers must be presented at the symposium in English. The final manuscripts of all accepted papers will be published as submitted in the proceedings.

No-show papers will not be included in the symposium proceedings and will not be submitted to the IEEE Xplore database.

COVID-19 Watch: The 2023 International VLSI Symposium on Technology, Systems and Applications is planned as an in-person/on-site event. If COVID-19 remains a threat to the convention, we will retain the hybrid format used in 2022, i.e., overseas presenters who face travel restrictions may provide a pre-recorded video presentation.

Important Dates

  • Deadline for Paper Submission - 1 November 2022
  • Deadline for Author Registration - 28 February 2023

Submit a Paper

  • Submission to TSA
  • Submission to DAT




Call for Papers

Important announcement for selected paper authors:

Camera-Ready Instructions: Click to Download Template

Presentation Template: Click to Download Template

Poster Template: Click to Download Template

Important Dates:

  • Final Deadline for Full Paper Submission: Closed
  • Notification of Acceptance: Closed

Hardware for AI and ML:

Chips demonstrating system, architecture, and circuit innovations for machine learning and artificial intelligence; AI accelerator design; AI-boosted circuits and systems for brain-machine interfaces; memory-centric accelerator design; low-power autonomous systems.

Embedded Systems Design:

ESL and system-level design methodology; processor and memory design; concurrent interconnect; networks-on-chip; defect-tolerant architectures; hardware/software co-design and verification; reconfigurable computing; embedded multicore SoCs and systems; embedded software including operating systems, firmware, middleware, communication, virtualization, encryption, compression, security, and reliability; hybrid systems-on-chip; embedded systems for automotive and electric vehicles.

Analog & Mixed-Signal Circuits:

Amplifiers, comparators, oscillators, filters, and references; nonlinear analog circuits; digitally assisted analog circuits; analog design at lower technology nodes; analog circuits for various applications; data converters; high-speed interfaces.

Low-Power Digital Architectures:

Next-generation digital circuits, building blocks, and complete systems (monolithic, 2.5D, and 3D) for reduced power and form factor; near- and sub-threshold systems; emerging applications; digital circuits for intra-chip communication; clock distribution; variation-tolerant design; digital regulators and digital sensors.

Photonic Integrated Circuits:

Silicon and III-V photonic integrated circuits; waveguides/interconnects; on-chip lasers; optical multiplexers/demultiplexers; photodetectors and sensors; quantum photonics; RF photonics; mid-IR/THz photonics; heterogeneous integration; packaging of photonic devices.

Advances in CAD for VLSI:

Logic and behavioural synthesis; logic mapping; simulation and formal verification; physical design techniques; post-route optimizations; simulation tools for design verification; static timing and timing exceptions; mixed-signal simulations in sub-10 nm nodes.

RF Circuits and Wireless Systems:

RF, mm-wave, and THz transceivers, SoCs, and SiPs; frequency synthesizers; system architecture for 5G and 6G wireless; next-generation systems for radar, sensing, and imaging; reliability aspects in RFICs.

Latest Trends in Device Design and Modelling:

Deep nanoscale CMOS devices; device modeling and simulation; multi-domain simulation; device/circuit-level reliability and variability; devices for beyond-CMOS; compact modeling and novel TCAD solutions.

Wireline and Optical Communication Circuits and Systems:

Receivers, transmitters, and transceivers for wireline systems; exploratory I/O circuits for advancing data rates, bandwidth density, power efficiency, equalization, robustness, adaptation capability, and design methodology; building blocks for wireline transceivers (such as AGCs, analog and ADC/DAC-based front ends, equalizers, clock generation and distribution circuits including PLLs, line drivers, and hybrids).

Beyond 2D in Packaging and Interconnects:

Wafer-level packaging; embedded chip packaging; 2.5D/3D integration; silicon, SiC, and glass interposers; thermal characterization and simulation; component-, system-, and product-level thermal management and characterization; Au/Ag/Cu/Al wire-bond/wedge-bond technology; flip-chip and Cu pillar; solder alternatives; Cu-to-Cu and wafer-level bonding and die attachment (Pb-free); fan-out; panel-level; chiplets; SiP; micro-bump; high-I/O thermocompression/hybrid bonding; fine-pitch/multi-layer RDL; printable interconnects.

Sensor Interfacing Circuits and Systems:

Sensor interfacing; instrumentation; biomedical circuits and healthcare systems; low-noise circuits; EMI-immune design; auto-calibration techniques; wearable electronics; flexible electronics; ultra-low-power circuit techniques; circuits and systems for IoT.

Power and Energy Management:

Power management and control circuits; regulators; power converter ICs; energy harvesting circuits and systems; wide-bandgap topologies and gate drivers; power and signal isolators; power management for automotive systems; battery management circuits and systems.

Neuromorphic Circuits and Systems:

Device, circuit, and architecture design, analysis, and optimization for neuromorphic computing systems; complexity and scalability of neuromorphic computing; emerging technologies for brain-inspired nano-computing and communication; applications of neuromorphic computing in embedded and IoT devices, unmanned vehicles and drones, and cyber-physical systems.

Test and Reliability:

Simulation, formal verification, and validation at different abstraction levels; DFT; fault modelling and simulation; ATPG; BIST; fault tolerance; post-silicon validation and debug; delay test; memory test; reliability testing.

Please note:

All papers must be in PDF format only, with selectable text.

  • Each paper must be no more than 6 pages (including the abstract, figures, tables, and references), double-columned in IEEE format.
  • Your submission must not include information that serves to identify the authors of the manuscript, such as the name(s) or affiliation(s) of the author(s), anywhere in the manuscript, abstract, or embedded PDF data. References and bibliographic citations to the author(s)' own published works or affiliations should be made in the third person.
  • Submissions not adhering to these rules, or determined to be previously published or simultaneously submitted to another conference or journal, will be summarily rejected.
  • The TPC Chairs reserve the right to reject any manuscripts not adhering to these rules.

For questions, contact: [email protected]





Technical Sponsors

ACM

Congratulations to the GLSVLSI 2023 Best Papers and Best Poster/LBR Award Winners:

Best Paper 1st Place

Bit-Stream Processing with No Bit-Stream: Efficient Software Simulation of Stochastic Vision Machines Sercan Aygun, M. Hassan Najafi, Mohsen Imani, and Ece Olcay Gunes

Best Paper 2nd Place

IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge Mehrdad Morsali, Mahmoud Nazzal, Abdallah Khreishah and Shaahin Angizi

Best Paper 3rd Place

SCRAMBLE-CFI: Mitigating Fault-Induced Control-Flow Attacks on OpenTitan Pascal Nasahl and Stefan Mangard

Best Poster/LBR Award 1st Place

Noise-Resilient and Reduced Depth Approximate Adders for NISQ Quantum Computing Bhaskar Gaur, Travis Humble and Himanshu Thapliyal

Best Poster/LBR Award 2nd Place

Statistical Weight Refresh System for CTT-Based Synaptic Arrays Samuel Dayo, Ataollah Saeed Monir, Mousa Karimi and Boris Vaisband

The 34th edition of GLSVLSI will be held as an in-person conference. Original, unpublished papers describing research in the general areas of VLSI and hardware design are solicited. Please visit http://www.glsvlsi.org/ for more information.

Program Tracks

  • VLSI Circuits and Design: ASIC and FPGA design, microprocessors/micro-architectures, embedded processors, high-speed/low-power circuits, analog/digital/mixed-signal systems, NoC, SoC, IoT, interconnects, memories, bio-inspired and neuromorphic circuits and systems, BioMEMs, lab-on-a-chip, biosensors, CAD tools for biology and biomedical systems, implantable and wearable devices, machine-learning for design and optimization of VLSI circuits and design.
  • IoT and Smart Systems: circuits, computing, processing, and design of IoT and smart systems such as smart cities, smart healthcare, smart transportation, smart grid, cyber-physical systems, edge computing, machine learning for IoT, TinyML.
  • Computer-Aided Design (CAD): hardware/software co-design, high-level synthesis, logic synthesis, simulation and formal verification, layout, design for manufacturing, algorithms and complexity analysis, physical design (placement, route, CTS), static timing analysis, signal and power integrity, machine learning for CAD and EDA design.
  • Testing, Reliability, Fault-Tolerance: digital/analog/mixed-signal testing, reliability, robustness, static/dynamic defect- and fault-recoverability, variation-aware design, learning-assisted testing.
  • Emerging Computing & Post-CMOS Technologies: nanotechnology, quantum computing, approximate and stochastic computing, sensor and sensor networks, post CMOS VLSI.
  • Hardware Security: trusted IC, IP protection, hardware security primitives, reverse engineering, hardware Trojans, side-channel analysis, CPS/IoT security, machine learning for HW security.
  • VLSI for Machine Learning and Artificial Intelligence: hardware accelerators for machine learning, novel architectures for deep learning, brain-inspired computing, big data computing, reinforcement learning, cloud computing for Internet-of-Things (IoT) devices.
  • Microelectronic Systems Education: Pedagogical innovations using a wide range of technologies such as ASIC, FPGA, multicore, GPU, TPU, educational techniques including novel curricula and laboratories, assessment methods, distance learning, textbooks, and design projects, Industry and academic collaborative programs and teaching.

Important Dates:

Keynote Speakers


Paper Submission: Authors are invited to submit full-length 6-page (with 2 extra pages for an additional fee), original, unpublished papers along with an abstract of at most 200 words. To enable blind review, the author list should be omitted from the main document. Previously published papers or papers currently under review for other conferences/journals should NOT be submitted and will not be considered. Electronic submission in PDF format to the http://www.glsvlsi.org website is required. Author and contact information (name, affiliation, mailing address, telephone, fax, e-mail) must be entered during the submission process.

Paper Format (camera-ready): Submissions should be in camera-ready two-column format, following the ACM proceedings specifications located at ACM Template and the classification system detailed at ACM 2012 Class.

For Overleaf users, please use the following template: ACM Proceedings Template - Overleaf. For LaTeX users, please use the following ZIP file: acmart-primary.zip. For Word users, please use the following template: interim-layout.docx.

Paper Publication and Presenter Registration: Papers will be accepted for regular or poster presentation at the symposium. Every accepted paper MUST have at least one author registered to the symposium by the time the camera-ready paper is submitted; at least one of the authors is also expected to attend the symposium and present the paper.

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and has recently committed to collecting ORCID IDs from all published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

This site is maintained by: GLSVLSI 2024 Webmaster Anahita Asadi ( [email protected] ) and Yaroslav Popryho ( [email protected] ) University of Illinois Chicago

Emerging VLSI Trends in 2023

  • by Maven Silicon
  • July 19, 2023
  • 3 minutes read


Looking for the latest VLSI trends and VLSI jobs in 2023? Maven Silicon, a leading VLSI training institute, is here to guide you. VLSI is revolutionizing industries with its ability to integrate millions of transistors onto a single chip. In this blog post, we’ll explore the emerging VLSI trends in 2023 that are shaping the future and highlight the exciting job openings in this field. Discover the benefits of pursuing a career in VLSI and how Maven Silicon can help you kick-start your journey.

VLSI Application & Trends in 2023

The applications of VLSI span across various industries, including telecommunications, automotive, healthcare, and artificial intelligence. As we move into 2023, several VLSI trends are making waves:

AI-driven VLSI

Artificial Intelligence (AI) has merged with VLSI, opening up endless possibilities. AI-driven VLSI solutions have gained significant traction in industries like autonomous vehicles, robotics, smart homes, and beyond. The integration of AI algorithms directly into VLSI chips allows for the real-time processing of massive amounts of data, leading to intelligent decision-making and unprecedented levels of efficiency. This trend empowers autonomous vehicles to analyze complex surroundings, robots to navigate dynamically changing environments, and smart homes to adapt to residents’ preferences seamlessly. The synergy between AI and VLSI has propelled us toward a new era of intelligent and responsive technologies.

IoT and VLSI

The Internet of Things (IoT) revolution is in full swing, and VLSI plays a pivotal role in shaping this interconnected ecosystem. Emerging trends in VLSI focus on designing chips optimized for IoT-enabled devices, ensuring efficient data communication, low power consumption, and enhanced security. These specialized VLSI chips enable IoT devices to communicate seamlessly over the internet, exchanging data with other devices and cloud services. Moreover, with advancements in low-power design techniques, IoT devices can operate for extended periods on battery power, making them more practical and environmentally friendly. VLSI’s contribution to IoT is driving the proliferation of smart homes, smart cities, and industrial automation, transforming the way we interact with our surroundings.

Edge Computing and VLSI

Edge computing has emerged as a game-changer in handling real-time data processing and analysis. VLSI’s role in this trend is crucial, as it enables the development of high-performance, energy-efficient chips tailored for edge devices. By processing data locally at the edge, these VLSI chips significantly reduce latency and response times, making them ideal for applications that demand immediate results. Edge devices, such as sensors and cameras, benefit from low-power VLSI solutions that allow for prolonged operation without compromising performance. The combination of edge computing and VLSI has unlocked a new realm of possibilities, from responsive AI applications to smart infrastructure like traffic management and environmental monitoring.

Benefits of VLSI

Exciting and Challenging Work

The field of VLSI indeed provides a dynamic and intellectually stimulating work environment for engineers and professionals. As a VLSI engineer, you get the opportunity to be at the forefront of designing complex integrated circuits that power a wide range of electronic devices, from smartphones and computers to IoT devices and automotive electronics.

Also read: Why VLSI is Used?

Lucrative Job Opportunities

The demand for VLSI professionals is on the rise, making it a highly sought-after field with numerous job opportunities across various industries. As technology continues to advance and electronic devices become an integral part of our lives, the need for skilled VLSI engineers has grown significantly.

Positions such as VLSI Design Engineer, Verification Engineer, and Physical Design Engineer are in high demand. VLSI Design Engineers are responsible for designing and architecting integrated circuits, while Verification Engineers focus on validating and testing chip designs. Physical Design Engineers, on the other hand, play a crucial role in implementing the circuit layout to optimize performance and power consumption.

Also read: Skills required to become a VLSI engineer?

Job Openings

If you’re eager to embark on a VLSI career, numerous job openings await you. Maven Silicon is renowned for its VLSI training with 100% placement assistance. Explore exciting roles like VLSI Design Engineer, Verification Engineer, Physical Design Engineer, FPGA Engineer, and Analog/Mixed-Signal Design Engineer.

Also read: Salary of VLSI Engineers in India

As we step into 2023, the world of VLSI presents abundant opportunities. Stay updated with the latest VLSI trends, leverage the benefits of this field, and secure a rewarding career in VLSI. Maven Silicon can equip you with the necessary skills to excel in the ever-evolving VLSI landscape. Start your journey towards a successful VLSI career today with our job-oriented courses .


Related Posts

  • VLSI's Role in Advancing Sensors and Smart Devices
  • How VLSI Is Used to Improve Battery Life in Mobile Devices
  • Challenges in Modern SoC Design Verification


Have Doubts?

Why should I do VLSI training?

All the integrated chips we use in mobiles, TVs, computers, satellites, automobiles, etc. are designed with VLSI technology. Hence, there is huge scope and growth in the VLSI industry, and it is full of job opportunities. Since there is a large gap between what college education offers and what the industry expects, VLSI training is recommended: it bridges that gap and gives you great hands-on experience.

What is chip designing?

Chip design involves several steps: defining the chip's architecture, creating circuit designs, running simulations, supervising layout, taping out the chip to the foundry, and evaluating the prototype once the chip comes back from fabrication. Chip designers work to make faster, cheaper, and more innovative chips that can automate part or all of an electronic device's function. A chip design engineer's job involves the architecture, logic design, circuit design, and physical design of the chip, as well as testing and verification of the final product.

Is VLSI a good career?

VLSI is a very good domain in which to build a career, with a huge number of opportunities. There is demand for chips in every sector, be it automobiles, consumer electronics, or high-end servers. You should have a good command of Verilog, SystemVerilog, and UVM to start your career as a VLSI design or VLSI verification engineer.

What is the eligibility for VLSI Chip Designing Courses?

Undergraduates, graduates, or postgraduates from the streams below can take up a VLSI chip design course and build a career in the VLSI industry: BE/BTech in EEE/ECE/TE, or ME/MTech/MS in Electronics/MSc Electronics.

To join the industry as a VLSI verification engineer, you must have hands-on experience with the following topics: SystemVerilog, the Universal Verification Methodology (UVM), and assertion-based verification with SystemVerilog Assertions (SVA).
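As a hedged illustration of what assertion-based verification with SVA looks like in practice, the sketch below shows a checker module with a single concurrent assertion. The signal names (`req`, `gnt`) and the four-cycle bound are invented for this example; real protocols define their own signals and timing.

```systemverilog
// Illustrative SVA sketch (hypothetical signals): every request must be
// granted within 1 to 4 clock cycles after it is asserted.
module req_gnt_checker (
  input logic clk,
  input logic rst_n,
  input logic req,   // hypothetical request signal
  input logic gnt    // hypothetical grant signal
);
  property p_req_gets_gnt;
    @(posedge clk) disable iff (!rst_n)
      req |-> ##[1:4] gnt;   // grant must follow within 4 cycles
  endproperty

  assert property (p_req_gets_gnt)
    else $error("req was not granted within 4 cycles");
endmodule
```

Such a checker is typically bound to the DUT in a testbench, so the simulator flags any run in which the timing property is violated.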

Maven Silicon provides the best quality VLSI training through a variety of design and verification courses to suit your needs. We offer online VLSI courses, job-oriented full-time and blended VLSI courses, internship programs, part-time courses, and corporate training. Explore our offerings at https://www.maven-silicon.com/

Every course has a different admission procedure:
1. For the Advanced VLSI Design and Verification course at Maven Silicon, you can apply while you are in the final semester of your graduation or post-graduation.
2. For the Internship program, you can apply in your pre-final/final year. We advise you to book your seat in advance, given limited admissions and increased demand.
3. You can subscribe to our online courses directly from our elearn portal https://elearn.maven-silicon.com/ You can apply for our Online, Job-oriented, Part-time and Corporate courses at https://www.maven-silicon.com/application

We have an entrance exam for our job-oriented courses VLSI RN and VLSI VM. After you meet the eligibility criteria, you take an Online Entrance Test covering the concepts of basic electronics and digital electronics. After scoring 60% on this test, you proceed to a technical interview with our technical experts. Based on your performance in the interview, you will be selected for the Advanced VLSI Design and Verification course. For our online VLSI courses, we do not have any entrance exams; you can subscribe to the courses directly from our elearn portal https://elearn.maven-silicon.com/

Yes, we provide scholarships for our job-oriented courses VLSI RN and VLSI VM based on your performance in the technical interview. To excel in the Online Entrance Test and the technical interview, we suggest taking our online Digital Electronics course at https://elearn.maven-silicon.com/digital-electronics This course will help you learn and refresh the complete fundamentals of digital electronics, which are highly needed for any VLSI course. Contact us for more details.

We provide 100% placement assistance with our job-oriented courses until you get placed. You can refer to the following link for placement updates and to learn more about our hiring partners: https://www.maven-silicon.com/placement

The VLSI front-end course imparts training in the design and verification of a chip. This mostly includes RTL (Register Transfer Level) coding using VHDL, Verilog, or SystemVerilog, and verification of the DUT (an IP or SoC) by building a verification environment, or testbench, using SystemVerilog/UVM. You also learn to meet the chip's timing constraints using STA (Static Timing Analysis) and to synthesize the design using synthesizable constructs. The maximum number of VLSI job opportunities is in the verification segment. Back-end courses mostly deal with the physical design of the chip, which includes floorplanning, mapping, place and route, DFT, ATPG scan insertion, and checks for the flip-flops. They also cover physical verification of the chip, memory characterization, analog layout, and design.
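To make the DUT/testbench relationship described above concrete, here is a minimal sketch: a 2-to-1 multiplexer as the DUT and a small directed testbench that drives stimulus and checks the response. The module and signal names are invented for illustration and are not taken from any course material.

```systemverilog
// DUT: a 2-to-1 multiplexer (illustrative example only)
module mux2 (
  input  wire a,
  input  wire b,
  input  wire sel,
  output wire y
);
  assign y = sel ? b : a;
endmodule

// Testbench: instantiates the DUT, drives inputs, and checks outputs
module tb_mux2;
  reg a, b, sel;
  wire y;

  mux2 dut (.a(a), .b(b), .sel(sel), .y(y));

  initial begin
    a = 0; b = 1;
    sel = 0; #10;
    if (y !== a) $display("FAIL: sel=0, expected %b, got %b", a, y);
    sel = 1; #10;
    if (y !== b) $display("FAIL: sel=1, expected %b, got %b", b, y);
    $finish;
  end
endmodule
```

In a real course project the DUT is far larger (an IP or SoC) and the directed checks are replaced by a constrained-random SystemVerilog/UVM environment, but the stimulus-then-check structure is the same.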

Yes. VLSI is a high-growth domain with huge job opportunities. Electronics is the basic knowledge required to get into the VLSI industry, so engineers with an electronics background can enter it easily. A VLSI course helps ECE/EEE students learn and build the skill set the industry requires to enter the chip/IC design and verification domain.

Inexpensive courses with the utmost quality are our unique selling points. You can explore our courses at https://elearn.maven-silicon.com/

We help you with support material to enhance your basic knowledge of Digital electronics and perform your best. Our online Digital electronics course will help you to learn and refresh the complete fundamentals of digital electronics, which are highly needed for any VLSI course. Contact us for more details.

We do have online VLSI courses for engineers like you. You can start learning with our hands-on online VLSI courses, which come with labs, projects, and reference material. We also connect with you through live Q&A and doubt-clarification sessions and a WhatsApp support group. Click here to explore and subscribe: https://elearn.maven-silicon.com/. If you are looking for an online VLSI course with placement support, refer to our Blended VLSI learning program at https://www.maven-silicon.com/blended-vlsi-design-asic-verification

We always encourage you to join the course along with friends because it motivates you to learn and finish the course at a fast pace. Contact us to know about group discount options.

Yes. It is good to start early. You can explore and subscribe to our online VLSI Design Methodologies course or our Internship Program. It is a front-end VLSI course that covers the VLSI design flow, digital design, and RTL programming using Verilog HDL. After completing the online VLSI DM course or Internship Program, you can easily crack college campus interviews, or you can take up our Advanced ASIC Verification course with 100% placement assistance and avail up to a 100% scholarship based on your grades in our online VLSI design course and your score in the technical interview with our experts.

Yes, we have part-time/weekend VLSI courses for working professionals. They are specially designed to help you strike a balance between your job and learning. Explore the VLSI DM and VM part-time courses under Part-time VLSI Courses in the program offerings on our website: https://www.maven-silicon.com/systemverilog-uvm-functional-verification-course

Our job-oriented VLSI courses are highly effective, rigorous programs that follow a continuous evaluation scheme. Candidates are evaluated through lab reports, project reports, practice tests, assignments, technical presentations, and mock interviews. Our online VLSI courses also include evaluation through quizzes, tests, and assignments.

You do not need to pay extra for the requisite learning material. We provide free library access and free online VLSI courses, for reference and support, to trainees enrolled in job-oriented courses.

Once you complete your online VLSI course, you can upgrade to a job-oriented VLSI course with a very good scholarship. We provide 100% placement assistance for the job-oriented VLSI courses: Advanced VLSI Design and Verification [VLSI-RN] and Advanced ASIC Verification [VLSI-VM].

Maven Silicon offers customized in-house and onsite corporate VLSI training courses. These programs are specially designed for engineers, keeping in view the ever-changing demands of the industry. Participants are equipped with the latest tools, techniques, and skills needed to excel as verification engineers. Some of our corporate training VLSI courses are SystemVerilog HVL, Verilog HDL, Universal Verification Methodology, and Assertion-Based Verification. Click here for more details: https://www.maven-silicon.com/corporate-training

Yes. Our courses will be very useful. Many students have taken our courses before going to foreign universities for their Master's program in VLSI. The practical approach of the courses has helped them get campus job opportunities and assistantships.

You can opt for an online or offline course, but you must choose the right mode considering the time you can spend and the flexibility you need. The online course also provides live Q&A, doubt clarification, handy technical support, and reference material, so it offers the best of both worlds. You can learn on the go alongside your college studies or regular office hours and upskill yourself. With Maven Silicon's online verification course, you can master VLSI even from a remote corner of the world.

The steps involved in chip design are: define the chip's architecture, create circuit designs, run simulations, supervise layout, tape out the chip to the foundry, and evaluate the prototype once the chip comes back. Chip designers work to make faster, cheaper, and more innovative chips that can automate part or all of the function of electronic devices. A chip design engineer's job involves the architecture, logic design, circuit design, and physical design of the chip, as well as testing and verification of the final product.



Srivastava is a co-general chair of the 2024 IEEE HOST Symposium


ISR Director and Electrical and Computer Engineering Professor Ankur Srivastava (ECE/ISR) and Mark Tehranipoor of the University of Florida are the co-general chairs of the next IEEE International Symposium on Hardware Oriented Security and Trust (HOST), which will be held in May 2024 in Washington, D.C.

HOST is the premier symposium that facilitates the rapid growth of hardware-based security research and development. Since 2008, HOST has served as the globally recognized event for researchers and practitioners to advance knowledge and technologies related to hardware security and assurance.

The rapid proliferation of computing and communication systems, with increasing computational power and connectivity, into every sphere of modern life has brought security to the forefront of system design, test, and validation processes. The emergence of new application spaces for these systems in the internet-of-things (IoT) regime is creating new attack surfaces as well as new requirements for secure and trusted system operation. Additionally, the design, manufacturing, and distribution of microchips, PCBs, and other electronic components are becoming more sophisticated and globally distributed, introducing a number of potential security vulnerabilities.

Hardware plays an increasingly important and integral role in system security with many emerging system and application vulnerabilities and defense mechanisms relating to hardware. HOST aims to facilitate the rapid growth of hardware-based security research and development. The symposium highlights new results in hardware and system security, including techniques, tools, design/test methods, architectures, circuits and applications of secure hardware.

Ankur Srivastava's primary research interests lie in high-performance, low-power, and secure electronic systems and applications such as computer vision, data and storage centers, and sensor networks. He has published numerous papers on these topics at prestigious venues. He has been part of the technical program and organizing committees of conferences such as ICCAD, DAC, ISPD, ICCD, GLSVLSI, HOST, and others. He has served as an associate editor for IEEE Transactions on VLSI, IEEE Transactions on CAD, and INTEGRATION, the VLSI Journal.

Tehranipoor is the Intel Charles E. Young Preeminence Endowed Chair Professor in Cybersecurity and the Chair of the Department of Electrical and Computer Engineering at the University of Florida.


A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business for distributed digital and AI innovation.


Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.


Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.
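The taker/shaper distinction can be sketched in code. Below is a minimal illustration, with `call_llm` as a stub standing in for any hosted model API and a plain dict as the "proprietary data" store; all names and the retrieval logic are hypothetical, not a specific vendor's interface.

```python
# Illustrative sketch: "taker" vs. "shaper" use of a hosted LLM.
# call_llm is a stub; in the shaper pattern the prompt is enriched
# with proprietary context before the model is invoked.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model API.
    return f"ANSWER[{prompt[:40]}...]"

# Taker: use the model as-is, through its public interface.
def taker_answer(question: str) -> str:
    return call_llm(question)

# Shaper: retrieve internal data and inject it into the prompt.
DOCS = {
    "service_history": "Unit 7 failed twice in 2023; both were pump seals.",
    "inventory": "Pump seals in stock: 14 units, warehouse B.",
}

def retrieve(question: str) -> list[str]:
    # Naive keyword retrieval over internal documents (illustrative only).
    return [text for text in DOCS.values()
            if any(w in text.lower() for w in question.lower().split())]

def shaper_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)
```

The only structural difference is the retrieval-and-injection step: the taker sends the user's question straight through, while the shaper grounds the same model in proprietary data.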

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • data scientist:
  • prompt engineering
  • in-context learning
  • bias detection
  • pattern identification
  • reinforcement learning from human feedback
  • hyperparameter/large language model fine-tuning; transfer learning
  • data engineer:
  • data wrangling and data warehousing
  • data pipeline construction
  • multimodal processing
  • vector database management

The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli, showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.
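Two of the central team's assets described above, a registry of approved models and a library of approved prompt templates, can be sketched as follows. The model names, template keys, and fields are all hypothetical placeholders.

```python
# Illustrative sketch of central-team assets: an approved-model
# registry and a versioned, approved prompt library.

APPROVED_MODELS = {
    "summarizer-v1": {"provider": "internal-gateway", "max_tokens": 1024},
}

PROMPT_LIBRARY = {
    # key: (domain, task, version)
    ("maintenance", "triage", 2): (
        "You are a maintenance assistant. Given the report below, "
        "list likely root causes.\n\nReport:\n{report}"
    ),
}

def get_prompt(domain: str, task: str, version: int, **fields) -> str:
    """Fetch an approved template and fill in its fields."""
    template = PROMPT_LIBRARY[(domain, task, version)]
    return template.format(**fields)

def check_model(name: str) -> dict:
    """Teams may only provision models the central team has approved."""
    if name not in APPROVED_MODELS:
        raise PermissionError(f"model {name!r} is not approved")
    return APPROVED_MODELS[name]
```

Routing every model request and prompt lookup through functions like these is one simple way a central team can contain cost and risk while still letting product squads move quickly.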

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and set standards for how APIs should function and be built. They developed standardized tooling and infrastructure through which teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (which a handful of teams use to build specific solutions) to “Lilli as a platform” (which teams across the enterprise can access to build other products).

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, which had been approved by both the security and legal teams, and made them available in a library for teams to use. More important is taking the time to identify and build those capabilities that are common across the most priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, they were able to generate a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
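The metadata-tagging standard described in the second bullet can be made concrete with a small catalog schema. The field names and retention values below are hypothetical; the point is that tags make documents findable and a retention flag makes them removable from model corpora on request.

```python
# Illustrative sketch of a metadata standard for unstructured sources,
# so teams can find documents and purge them from models if required.

from dataclasses import dataclass, field

@dataclass
class DocMetadata:
    doc_id: str
    source: str            # e.g. "slides", "video-transcript", "wiki"
    owner: str
    retention: str         # e.g. "keep" or "remove-on-request"
    tags: list[str] = field(default_factory=list)

CATALOG: list[DocMetadata] = []

def register(meta: DocMetadata) -> None:
    CATALOG.append(meta)

def find_by_tag(tag: str) -> list[str]:
    return [m.doc_id for m in CATALOG if tag in m.tags]

def removable() -> list[str]:
    """Documents that must be purged from model corpora on request."""
    return [m.doc_id for m in CATALOG if m.retention == "remove-on-request"]
```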

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
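The consistency check described above can be sketched simply: ask paraphrased variants of the same question and measure how often the answers agree. The `answer` function is a canned stub standing in for the insurer's claims tool; a real system would generate the variants with an LLM as well.

```python
# Illustrative sketch of an answer-consistency check: pose paraphrased
# variants of one question and score agreement across the answers.

def answer(question: str) -> str:
    # Stub: a real system would query the claims-assistant model.
    return "covered" if "flood" in question.lower() else "not covered"

def consistency(variants: list[str]) -> float:
    """Fraction of variants that agree with the most common answer."""
    answers = [answer(v) for v in variants]
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers)
```

A score well below 1.0 flags questions where the tool's answers depend on phrasing, which is exactly where users lose trust and where curation effort should be focused.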

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to reuse for similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. In practice, this means setting standards for developing gen AI assets (for example, prompts and context) so that they can easily be applied to other cases.
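One way to treat a prompt as a standardized, reusable asset is to package it with the context fields it requires, so another team can discover and apply it safely. This is a minimal sketch; the `PromptAsset` name and its fields are illustrative, not taken from any particular company:

```python
# Sketch of a reusable gen AI "asset": a named prompt template plus the
# context fields it requires, so reuse fails loudly instead of silently.
from dataclasses import dataclass
from string import Template

@dataclass(frozen=True)
class PromptAsset:
    name: str
    template: Template
    required_fields: frozenset[str]

    def render(self, **context: str) -> str:
        """Fill the template, rejecting incomplete context up front."""
        missing = self.required_fields - set(context)
        if missing:
            raise ValueError(f"missing context: {sorted(missing)}")
        return self.template.substitute(context)

summarize = PromptAsset(
    name="summarize-document",
    template=Template("Summarize $document for a $audience audience."),
    required_fields=frozenset({"document", "audience"}),
)
print(summarize.render(document="claims policy v3", audience="technical"))
```

A registry of such assets, each with an owner and a version, is one concrete form the reuse standard can take.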

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied by documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.
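The documentation idea above (often called a model card) can be sketched as structured data that travels with the model, so performance, risk-test results, and decision rationales stay auditable. Every field name here is an assumption for illustration, not a standard schema:

```python
# Minimal sketch of model documentation ("model card") as structured data.
# Field names and example values are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    performance: dict = field(default_factory=dict)  # e.g. accuracy metrics
    risk_tests: dict = field(default_factory=dict)   # privacy, bias, IP checks
    decisions: list = field(default_factory=list)    # rationale log for regulators

card = ModelCard(
    model_name="claims-assistant",
    version="1.2.0",
    intended_use="Internal claims triage; not customer-facing.",
    performance={"answer_accuracy": 0.94},
    risk_tests={"pii_leakage": "pass", "bias_audit": "pass"},
    decisions=["Restricted retrieval to approved policy documents only."],
)
print(json.dumps(asdict(card), indent=2))  # ships alongside the model
```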

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

IMAGES

  1. VLSI Design Question Paper UPTU

    vlsi research papers 2023

  2. VLSI Semiconductors Market: Global Industry Analysis and Forecast

    vlsi research papers 2023

  3. VTU CMOS VLSI Question Papers

    vlsi research papers 2023

  4. (PDF) Review Paper on Low Power VLSI Design Techniques

    vlsi research papers 2023

  5. RGPV VLSI Paper

    vlsi research papers 2023

  6. VLSI Design Question Paper UPTU

    vlsi research papers 2023

VIDEO

  1. IEEE Transactions on VLSI 2023 Research Papers

  2. IEEE Transactions on VLSI 2021 Research Papers

  3. VDAT 2023

  4. Demystifying VLSI Technology: Exploring Its Future Possibilities

  5. Advanced VLSI Design: 2023-24 Lecture 5 Static Timing Analysis

  6. What is VLSI

COMMENTS

  1. 2023 International VLSI Symposium on Technology, Systems and

    Read all the papers in 2023 International VLSI Symposium on Technology, Systems and Applications (VLSI-TSA/VLSI-DAT) | IEEE Conference | IEEE Xplore

  2. VLSI Symposium

    We also would like to invite you to join "2023 Symposium on VLSI technology and Circuits," which will be held from Sunday, June 11 to Friday, June 16 at Rihga Royal Hotel Kyoto in Japan. 2023 Symposium is planned as a fully in-person event in Kyoto for the first time in four years. On-demand access to technical papers will be available for ...

  3. GLSVLSI 2023, Knoxville, TN, USA

    02/27/2023 GLSVLSI 2023 Call for Late Breaking Research Papers [link] is online. 02/20/2023 The hard deadline for paper submission has been extended to February 27, 2023 @ 11:59pm EST. 02/06/2023 Selected papers from GLSVLSI 2023 program will be invited to special section of IEEE Transactions on Emerging Topics in Computing.

  4. 2023 International VLSI Symposium on Technology, Systems and ...

    The 2023 International VLSI Symposium on Technology, Systems, and Applications will be held in the Ambassador Hotel Hsinchu, Taiwan April 17-20, 2023. Established in 1983, the Symposium has been the premier event on VLSI in Taiwan and a leading technology symposium in the world for nearly 40 years. To address the needs of the ever-changing ...

  5. PDF Technical Highlights from the 2023 Symposium on VLSI Technology and

    The 2023 Symposium on VLSI Technology and Circuits is a premiere international conference that records the pace, progress, and evolution of micro/nano integrated electronics, scheduled ... (Paper T1-4) A research collaboration led by TSMC demonstrates a scenario for low contact resistance at scaled contact lengths in the Sb-MoS 2 system. This ...

  6. PDF The 2023 Symposium on VLSI Technology & Circuits, Evolving VLSI for a

    Tokyo, Japan (APRIL r25, 2023) - For the 43 d consecutive year delivering a unique convergence of microelectronics technology and circuits in one venue, Symposium on VLSI Technology & Circuits will resume as an in-person event in Kyoto, Japan on June 11-16, 2023. The six-day event will take place at the Rihga Royal Hotel Kyoto to

  7. Call For Papers

    All papers must be in PDF format only, with save-able text. Each paper must be no more than 6 pages (including the abstract, figures, tables, and references), double-columned in IEEE Format. Your submission must not include information that serves to identify the authors of the manuscript, such as name (s) or affiliation (s) of the author (s ...

  8. Electronics

    The focus of this Special Issue is on the research challenges related to the design of emerging microelectronics and VLSI circuits and systems that meet the demanding specifications of innovative applications. This Special Issue considers challenges in the fields of low power consumption, small integration area, testing and security, without ...

  9. Electronics

    Feature papers represent the most advanced research with significant potential for high impact in the field. ... (2023) Vol. 11 (2022) Vol. 10 (2021) Vol. 9 (2020) Vol. 8 (2019) ... The focus of this Special Issue is on the research challenges related to the design of emerging microelectronics and VLSI circuits and related systems that meet the ...

  10. Circuits and VLSI Design

    A 17-95.6 TOPS/W Deep Learning Inference Accelerator with Per-Vector Scaled 4-bit Quantization for Transformers in 5nm. , Steve Dai, Stephen Tell, Brian Zimmer, William Dally, Tom Gray, Brucek Khailany. 2022 Symposium on VLSI Technology & Circuits Digest of Technical Papers.

  11. 68784 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on VLSI TECHNOLOGY. ... Oct 2023; Rashmi Samanth ... Graphene nano ribbon field effect transistor is an ...

  12. PDF An Investigation of Low Power VLSI Design Techniques

    An growing portion of integrated circuits' overall power dissipation is being accounted for by leakage current. This paper discusses numerous power management techniques, methodologies, and tactics for low power circuits and systems. Future challenges for designing low power high performance circuits are also discussed.

  13. PDF Paper Submission Deadline: 23:59 JST Wednesday, February 1, 2023

    Symposium Scope. The Symposium calls for papers in the following areas: Advanced CMOS and interconnect technologies. Advanced packaging, 2.5D and 3D integration. Advanced process and material for scaling and new devices. Beyond CMOS, such as spin logic, optical and quantum computing. Biomedical devices, circuits, and systems. Data converters.

  14. Electronics

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... Research on the ambient ...

  15. Opportunity and Challenges for VLSI in IoT Application

    Abstract. Internet-of-things (IoT) systems combine sensing, computation, storage, and communication to sense physical systems and respond accordingly. However, larger size chips are not suitable ...

  16. GLSVLSI 2023, Knoxville, TN, USA

    Original, unpublished papers describing research in the general areas of VLSI and hardware design are solicited. Stay tuned for more information. ... Camera-Ready paper due: April 5, 2023 Call for Papers. Paper submission deadline: February 27, 2023 (11:59 EST) (HARD DEADLINE!)

  17. Emerging VLSI Trends in 2023

    Emerging trends in VLSI focus on designing chips optimized for IoT-enabled devices, ensuring efficient data communication, low power consumption, and enhanced security. These specialized VLSI chips enable IoT devices to communicate seamlessly over the internet, exchanging data with other devices and cloud services.

  18. News Story

    He has published numerous papers on these topics at prestigious venues. He has been a part of the technical program & organizing committees of conferences such as ICCAD, DAC, ISPD, ICCD, GLSVLSI, HOST and others. He has served as the associate editor for IEEE Transactions on VLSI, IEEE Transactions on CAD and INTEGRATION: VLSI Journal.

  19. VLSI Design (VLSID)

    Conference Call for Papers VLSID 2023 will be hosted in the best stand-alone convention center, HICC Hyderabad. ... The conference papers explore research in Embedded system alongside concepts in Computer architecture and other areas of study in High-level synthesis. ... International Conference on VLSI Design features CMOS research that ...

  20. A generative AI reset: Rewiring to turn potential into value in 2024

    It's time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI's enormous potential value is harder than expected.. With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI ...

  21. Research Roundup: How the Pandemic Changed Management

    Researchers recently reviewed 69 articles focused on the management implications of the Covid-19 pandemic that were published between March 2020 and July 2023 in top journals in management and ...