What Is Geometric Modeling? Types & Applications

digital representation and analysis of shapes

Representing physical objects in digital form is a given in contemporary industry. Apart from rough sketches for preliminary concept design, engineers now think of objects as designed in 2D or 3D digital spaces.

This article explores the fundamental concepts, types, and applications of geometric modeling. Geometric modeling of shapes is a crucial aspect of design and visualization, and understanding how shapes are represented supports not only Computer-Aided Design (CAD) but also advanced approaches such as finite element simulation and shape-optimization algorithms that seek the desired shape.

Understanding the Basics

Geometric modeling relies on foundational principles, including elementary mathematical concepts and geometric transformations. Readers will appreciate the precision and versatility of the underlying mathematics and its application to computer-based geometric modeling.

Looking ahead to advanced applications, these concepts are indispensable for building the basic digital shapes that form the framework for finite element and fluid dynamics simulation, i.e., computational meshes. Starting from the spatial boundaries of objects, 3D simulation software can produce predictions where analytic solutions are unavailable because of problem complexity. From wireframe or solid models in CAD programs, simulation programs define polygonal surfaces that represent the "surface mesh" and fill the volume of the shape with elements (called finite volumes in the case of fluid dynamics) used for lengthy numerical computations. The task of AI is to turn those lengthy numerical computations into real-time predictions.

Back to the basics now! This section explores the elementary concepts forming the backbone of geometric modeling, emphasizing their role in creating and representing virtual objects.

Elementary Mathematical Concepts in Geometric Modeling

Geometric modeling relies on fundamental principles in mathematics and geometric transformations. Central to geometric modeling is a set of mathematical concepts that serve as a framework for defining and describing digital objects: points, vectors, curves, and surfaces.

As a fundamental element of modeling, points, in mathematical terms, represent a location in space. In the digital realm, points are building blocks for constructing more complex geometric entities.

Vectors, directional quantities with both magnitude and direction, represent translations and orientations in geometric modeling.

curves and surfaces in 2D and 3D | mathematica.stackexchange.com

Curves are mathematical entities that define the path of a point in space. Represented parametrically, a curve in 3D space has the following mathematical description:

r(t) = < x(t), y(t), z(t) >

where "t" is a parameter over a specified interval, parametrizing the curve. Curve types include Bézier curves and B-splines, expressed as weighted combinations of control points.
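As a concrete illustration, a Bézier curve can be evaluated from its control points with de Casteljau's algorithm, i.e., repeated linear interpolation. The sketch below (in Python with NumPy; the function name is this example's own) evaluates a quadratic curve:

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Blend each consecutive pair of points; one point fewer each pass.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A quadratic Bezier curve in the plane with three control points.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
p = de_casteljau(ctrl, 0.5)   # the point at the middle of the parameter range
```

At t = 0 and t = 1 the curve interpolates the first and last control points, a defining property of Bézier curves.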

Finally, surfaces are mathematical constructs that define the boundaries of 3D shapes, represented with the following description of a parametric surface:

r(u,v) = < x(u,v), y(u,v), z(u,v) >

where the two parameters "u" and "v" parameterize the surface.

Examples of a surface model include parametric surfaces, defined through functions, and Bézier surfaces, expressed as combinations of control points. These mathematical models facilitate precise control over the shape and characteristics of surfaces in geometric modeling.
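The parametric description r(u,v) can be made concrete by sampling a classic parametric surface. The sketch below (an illustrative example, not tied to any particular CAD system) evaluates points on a torus:

```python
import numpy as np

def torus_point(u, v, R=2.0, r=0.5):
    """Point r(u, v) on a torus: u sweeps the main ring, v the tube cross-section."""
    x = (R + r * np.cos(v)) * np.cos(u)
    y = (R + r * np.cos(v)) * np.sin(u)
    z = r * np.sin(v)
    return np.array([x, y, z])

# Sample the surface on a 32 x 16 grid in parameter space.
u = np.linspace(0, 2 * np.pi, 32)
v = np.linspace(0, 2 * np.pi, 16)
grid = np.array([[torus_point(ui, vi) for vi in v] for ui in u])
```

Such a parameter grid is exactly what a modeler tessellates into polygons when turning a smooth surface into a renderable or simulatable mesh.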

Geometric Transformations: Manipulating Digital Entities

Geometric transformations form the backbone of geometric modeling, enabling the dynamic manipulation of digital entities with precision and versatility.

Three fundamental transformations—translation, rotation, and scaling—are pivotal in shaping virtual geometric objects.

Translation shifts geometric entities along a specified direction in space. By applying translation, digital objects can be moved to new positions without altering their shape or orientation.

Rotational transformations revolve around changing the orientation of geometric entities. Engineers and designers use rotations to position objects at specific angles, allowing for exploring design variations.

Scaling transformations alter the size of objects, either enlarging or reducing them proportionally. This operation is instrumental in adjusting dimensions and proportions within the digital space.
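The three transformations can be sketched as 3x3 homogeneous matrices acting on 2D points; composing transformations is then just matrix multiplication (a minimal illustration, with helper names chosen for this example):

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def scaling(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def apply(matrix, point):
    # Lift the 2D point to homogeneous coordinates, transform, and project back.
    x, y, w = matrix @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# Rotate (1, 0) by 90 degrees, then translate by (2, 3).
M = translation(2, 3) @ rotation(np.pi / 2)
p = apply(M, (1.0, 0.0))   # approximately (2, 4)
```

Note that the order of composition matters: translating before rotating gives a different result than rotating before translating.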

Types of Geometric Modeling: A Breakdown

Distinguishing between 2D and 3D modeling is basic to understanding geometric modeling. This section provides a technical overview, starting from the underlying mathematical principles.

Wireframe Geometric Modeling and Applications

Wireframe modeling, integral to Computer-Aided Design (CAD), is a fundamental technique for engineers and designers. This exploration delves into wireframe geometric modeling, dissecting its characteristics, applications, and limitations.

Wireframe modeling is a foundational geometric modeling approach in CAD software, representing a three-dimensional geometric model through a network of lines and curves. These elements, known as edges, outline the structure of the object's geometry without delving into surface details. It is a "minimalist" representation, focusing solely on the essential framework. This makes wireframe geometric modeling an efficient method for conceptualizing and visualizing complex structures.

Each line or curve holds critical geometric information in wireframe geometric modeling, specifying the connectivity between points and defining the object's edges. This simplicity facilitates rapid model creation and modification, a crucial aspect of the iterative design process.
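A minimal wireframe model is nothing more than a vertex list plus an edge list encoding connectivity. The sketch below (illustrative, not taken from any CAD package) builds the wireframe of a unit cube by connecting vertices that differ in exactly one coordinate:

```python
import itertools

# The eight vertices of a unit cube.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# An edge joins two vertices that differ in exactly one coordinate.
edges = [
    (i, j)
    for i, j in itertools.combinations(range(len(vertices)), 2)
    if sum(a != b for a, b in zip(vertices[i], vertices[j])) == 1
]
```

The cube's 8 vertices and 12 edges are the entire model: no faces, no surface information, which is precisely the wireframe limitation discussed below.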

Wireframe modeling finds extensive applications in product design, providing engineers with a versatile tool to conceptualize and refine their ideas. The simplicity of wireframes allows for quick ideation and exploration of design variations. This is particularly beneficial in the early stages of product development when multiple concepts are being considered.

Engineers use wireframe geometric modeling to define a product's basic geometric object and structure before moving on to more detailed representations. It serves as a visual guide, aiding in evaluating proportions, spatial relationships, and overall aesthetics. Additionally, the lightweight nature of wireframe models makes them computationally efficient, contributing to smoother collaboration and faster design iterations.

One key strength of wireframe modeling is its utility for representing object structures. By focusing on the underlying framework, wireframes convey the spatial arrangement of components within a design.

For complex machinery, architectural structures, or intricate mechanical components, wireframe geometric modeling provides a clear visualization of how different parts interact and fit together.

While wireframe modeling offers efficiency and simplicity, it comes with inherent limitations. One primary drawback is the lack of surface information. Wireframe models cannot convey realistic visualizations of the final product without explicit surface representation. This limitation makes them less suitable for presentations or client-facing visuals where a high level of realism is required.

Another consideration is that wireframe geometric modeling may not accurately capture complex geometries or intricate surface details. Other surface or solid modeling might be more appropriate when surface finish, texture, or intricate curves are crucial to the design.

identifying points which are contained in a polygonal surface | mathematica.stackexchange.com

Surface Modeling: Mathematical Representation and Challenges

The creation of surfaces and their mathematical representation is a technique explored in this section. A discussion of inherent challenges in surface modeling techniques accompanies applications in product design and animation. A technical perspective on the intricacies of surface modeling provides readers with a deeper understanding of its role in achieving realistic digital representations.

Surface modeling is rooted in mathematical representations that define the geometry of an object's external features. Unlike wireframe models, surface models go beyond structural frameworks and aim to capture the complex surfaces that define a product's visual and tactile characteristics.

Mathematically, surfaces can be described using various methods, such as parametric equations, spline curves, and NURBS (Non-Uniform Rational B-splines). Parametric equations define the coordinates of points on the surface in terms of parameters, offering precise control over shape and continuity. Spline curves, including Bézier and B-spline curves, are widely used for smooth and flexible surface representations. NURBS, a mathematical representation using rational functions, excels in representing complex and free-form surfaces.
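B-spline curves and surfaces are built from basis functions defined by the Cox-de Boor recursion; a minimal sketch of that recursion (the function name is this example's own) is:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree k at t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# On the valid parameter range of a uniform knot vector, the degree-2 basis
# functions sum to 1 (partition of unity).
knots = [0, 1, 2, 3, 4, 5, 6]
total = sum(bspline_basis(i, 2, 2.5, knots) for i in range(4))
```

The partition-of-unity property is what makes weighted combinations of control points behave predictably; NURBS extend this scheme with per-point rational weights.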

What are the applications for surface modeling?

Surface modeling finds extensive application in product design, where achieving realistic and aesthetically pleasing surfaces is paramount. Engineers and industrial designers use it to produce a final geometric model for consumer products, automotive components, or electronic devices. Representing smooth curves and intricate details is crucial for designing functional and visually appealing products.

In computer graphics animation, surface modeling is instrumental in creating lifelike characters and environments. By defining the surfaces of 3D models, computer graphics animators can bring virtual worlds to life with realism. Surface models are used to simulate the behavior of light on different materials, allowing for the creation of visually convincing animations.

Designing complex machinery, like that used in the automotive and aerospace industries, involves precision, maintenance, accessibility, and safety challenges. Intricate mathematical calculations and virtual 3D modeling add to this complexity but ensure precision and accuracy.

Geometric Solid Modeling: Adding Volume to the Digital Space

Geometric solid modeling allows engineers, architects, and designers to create realistic virtual representations of physical objects as geometric solid models. This technique transforms ideas into solid 3D models, with a final solid shape in the virtual space.

It's a mathematical approach to representing objects in a digital space that introduces the concept of volume, creating shapes with depth, breadth, and height. The fundamental building blocks of this technique are mathematical primitives such as spheres, cubes, cones, and cylinders, which, when assembled, form complex and realistic structures.

Geometric solid modeling is parametric.

Parametric modeling involves defining the object's dimensions and characteristics using parameters. This introduces a level of flexibility that is invaluable in the design process. Engineers and designers can easily modify parameters to alter the model's geometric shapes, sizes, or attributes without starting from scratch.

example of geometric transformation: sculpting of a surface | download.autodesk.com

Constructive Solid Geometry: Building Complexity from Simplicity

Constructive solid geometry is a vital technique within geometric solid modeling. It combines simple geometric primitives through Boolean operations like union, intersection, and subtraction to create more complex geometric shapes.

Constructive solid geometry provides an efficient and intuitive way to generate complex structures with precision.

elementary example of Boolean operations on 3D solids ("red" is the Boolean outcome of "yellow" - "green") | freesion.com
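One common way to sketch CSG in code is with signed distance functions, where a point is inside a solid when its distance value is negative; union, intersection, and subtraction then become min/max combinations of the operands' distances (an illustrative representation choice, not the only way CSG kernels work):

```python
import numpy as np

def sphere(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(np.asarray(p, dtype=float) - c) - radius

def union(a, b):        return lambda p: min(a(p), b(p))
def intersection(a, b): return lambda p: max(a(p), b(p))
def subtraction(a, b):  return lambda p: max(a(p), -b(p))

# "Yellow minus green": a sphere with a smaller sphere carved out of its side.
body = sphere((0, 0, 0), 1.0)
tool = sphere((1, 0, 0), 0.5)
shape = subtraction(body, tool)

inside = shape((-0.5, 0, 0)) < 0   # interior of the remaining solid
carved = shape((0.9, 0, 0)) > 0    # this point was removed by the subtraction
```

Because each operation returns a new distance function, arbitrarily deep Boolean trees compose the same way, mirroring the CSG tree a solid modeler maintains.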

Boundary Representation (B-Rep): Capturing Surface Details

In geometric solid modeling, the boundary representation ("B-Rep") method captures the details of an object's surfaces. B-Rep represents the solid by specifying its boundaries, including faces, edges, and vertices. This methodology is particularly advantageous in conveying surface characteristics crucial for visualization and analysis. It enables the creation of solid models with accurate volumes and realistic surface qualities.
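A minimal sketch of the B-Rep idea as data structures (the class names and fields are this example's own simplification; real modeling kernels store much richer topology) is:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    x: float
    y: float
    z: float

@dataclass
class Edge:
    start: Vertex
    end: Vertex

@dataclass
class Face:
    loop: list    # ordered edges bounding the face

@dataclass
class Solid:
    faces: list

verts = [Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(0, 1, 0), Vertex(0, 0, 1)]

def edge(i, j):
    return Edge(verts[i], verts[j])

# The four triangular faces of a tetrahedron form a closed boundary.
solid = Solid(faces=[
    Face([edge(0, 1), edge(1, 2), edge(2, 0)]),
    Face([edge(0, 1), edge(1, 3), edge(3, 0)]),
    Face([edge(1, 2), edge(2, 3), edge(3, 1)]),
    Face([edge(2, 0), edge(0, 3), edge(3, 2)]),
])

# Count each geometric edge once, regardless of which faces share it.
unique_edges = {frozenset((e.start, e.end)) for f in solid.faces for e in f.loop}
euler = len(verts) - len(unique_edges) + len(solid.faces)   # V - E + F
```

For a closed solid the counts satisfy Euler's formula V - E + F = 2, a quick sanity check that the boundary actually encloses a volume.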

Applications Across Industries

Geometric solid modeling finds extensive applications across diverse industries, profoundly impacting fields such as engineering, manufacturing, architecture, and computer-aided design (CAD).

In engineering, geometric solid modeling is the cornerstone of product design. Engineers use this technique to create detailed representations of mechanical components, ensuring precision in dimensions and functionality. Whether designing automotive parts, machinery components, or intricate devices, geometric solid modeling provides the accuracy required for prototyping and manufacturing.

Geometric solid models also serve as the blueprint for manufacturing processes. By translating digital models into physical objects through techniques like computer numerical control (CNC) machining or 3D printing, manufacturers can replicate the exact specifications outlined in the geometric solid model. This seamless translation from the digital to the physical realm enhances efficiency and accuracy in production.

Challenges and Future Developments

While geometric solid modeling has revolutionized how we design and manufacture objects, it's not without challenges. Managing complex models with numerous parameters can be computationally demanding, requiring powerful hardware for efficient processing.

Advancements in geometric solid modeling are expected to address these challenges. Integrating artificial intelligence and machine learning may enhance the automation of design processes, allowing systems to generate and optimize geometric models intelligently. Furthermore, developments in real-time rendering technologies will contribute to more immersive and interactive design experiences.

Technical Advancements in Visualization

Architects leverage geometric modeling for visualizing buildings and structures, and this article delves into the technical applications within Building Information Modeling (BIM) systems. Technical insights into how geometric modeling enhances project management and communication in the architecture and construction sectors are examined.

At the core of architectural and civil engineering projects lies the need for design and visualization. Geometric modeling provides architects and engineers with a toolset to bring their concepts to life in a virtual environment. From conceptualizing structures to refining details, architects leverage 3D geometric models to visualize and refine their designs. This enhances the creative process and allows stakeholders to understand the proposed structures better before they materialize.

One of the most transformative applications of geometric modeling in architecture and civil engineering is Building Information Modeling (BIM), which creates and manages digital representations of a building or infrastructure's physical and functional characteristics. Geometric modeling forms the backbone of BIM, enabling the creation of intelligent 3D models that store not just visual data but also crucial information about materials, costs, and timelines.

Structural Analysis and Simulation

Geometric modeling is a cornerstone in conducting structural analysis and simulation in civil engineering. Engineers utilize 3D models to simulate how structures respond to various conditions such as loads, environmental factors, and seismic events. The ability to run these simulations on geometric models ensures that the final structures meet safety standards and performance expectations.

Geometric models serve as the foundation for finite element analysis (FEA) simulations. They are imported into FEA software and divided into finite elements for analysis. Material properties and boundary conditions are set to replicate real-world scenarios accurately. Engineers use the software to conduct simulations and assess structural performance. Insights from FEA simulations are used to iteratively refine the geometric model, enhancing the overall design process.
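The loop above (mesh, assemble, apply boundary conditions, solve) can be sketched end to end on the simplest possible case: a 1D axial bar meshed into two-node linear elements. This is a toy solver for illustration only, not a production FEA code:

```python
import numpy as np

def bar_tip_displacement(n_elems, L=1.0, EA=1.0, F=1.0):
    """Axial bar fixed at x=0, pulled by force F at x=L, n_elems linear elements."""
    n_nodes = n_elems + 1
    h = L / n_elems
    K = np.zeros((n_nodes, n_nodes))
    ke = EA / h * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness matrix
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke                         # assemble into global K
    f = np.zeros(n_nodes)
    f[-1] = F                                             # load at the free end
    # Boundary condition u(0) = 0: solve on the remaining degrees of freedom.
    u = np.linalg.solve(K[1:, 1:], f[1:])
    return u[-1]                                          # tip displacement

tip = bar_tip_displacement(8)   # analytic answer is F*L/(EA) = 1.0 here
```

For this linear problem the finite element answer matches the analytic solution F·L/(EA) exactly, which makes it a useful smoke test for the assembly logic.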

Urban Planning and Spatial Analysis

In the realm of urban planning, geometric modeling aids in creating detailed representations of entire cityscapes. This includes the modeling of buildings, roads, utilities, and green spaces. With these comprehensive 3D models, urban planners can analyze the spatial relationships between different geometric elements, optimize traffic flow, and make informed decisions about the allocation of resources. Geometric modeling, therefore, becomes a crucial tool in creating sustainable and efficient urban environments.

Effective communication is key in architecture and civil engineering projects, especially with diverse stakeholders. Geometric modeling provides a common visual language that bridges the gap between technical experts, clients, and the general public.

In conclusion, the applications of geometric modeling in architecture and civil engineering are vast and transformative. From initial design concepts to the construction phase and beyond, this technology significantly enhances efficiency, accuracy, and communication, making it an indispensable tool in the modern built environment. As technology advances, the integration of geometric modeling will likely further shape the future of architectural and civil engineering practices.

What's Next? AI Revolution in Engineering!

The next step is the application of Artificial Intelligence to engineering predictive capabilities.

How is this related to geometric modeling?

The most recent Deep Learning and Computer Vision advances are indeed applied to real-time interpretation of CAD models in terms of engineering performance indices.

In practice, while not being a "general AI," Deep Learning with specific neural networks can predict physical properties associated with object shapes.

All this happens in a fraction of the usual computation time.

In the end, CAD system users can access advanced predictive capabilities without toil and can model desired industrial shapes more quickly and effectively. In business terms, this helps scale simulation solutions from a small circle of experts to the entire population of engineers within a corporation.
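The idea of replacing lengthy computations with a fast learned predictor can be sketched with a toy surrogate model: fit a cheap regression to a handful of precomputed "simulation" results, then query it instantly. Everything below, including the stand-in simulation function and its coefficients, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_simulation(length, radius):
    # Stand-in for a slow solver run; a real case would be CFD or FEA output.
    return 0.8 * radius**2 + 0.1 * length * radius

# Precompute a small training set of (shape parameters -> simulated result).
samples = rng.uniform(0.5, 2.0, size=(200, 2))
targets = np.array([expensive_simulation(l, r) for l, r in samples])

def features(x):
    """Quadratic feature map over the two shape parameters."""
    l, r = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(l), l, r, l * r, l**2, r**2])

coef, *_ = np.linalg.lstsq(features(samples), targets, rcond=None)

def surrogate(length, radius):
    """Instant prediction in place of a full simulation run."""
    return float(features(np.array([[length, radius]])) @ coef)

pred = surrogate(1.0, 1.0)   # close to expensive_simulation(1.0, 1.0) = 0.9
```

Real shape surrogates use deep networks over meshes or point clouds rather than a hand-picked polynomial, but the workflow — precompute, fit, then predict in real time — is the same.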


Princeton Shape Retrieval and Analysis Group

Project Overview

Shape representations:

  • Shape Distributions
  • Reflective Symmetry Descriptors
  • Spherical Harmonics
  • Skeletal Graphs

Query Interfaces:

  • 3D Sketches
  • 2D Sketches

Princeton Shape Benchmark

Princeton Segmentation Benchmark

  • The Princeton Segmentation Benchmark provides data for quantitative analysis of how people decompose objects into parts and for comparison of automatic mesh segmentation algorithms. To build the benchmark, we recruited eighty people to manually segment surface meshes into functional parts, yielding an average of 11 human-generated segmentations for each of 380 meshes across 19 object categories (shown in the figure above). This data set provides a sampled distribution over "how humans decompose each mesh into functional parts," which we treat as a probabilistic "ground truth" (darker lines in the image above show places where more people placed a segmentation boundary). Given this data set, it is possible to analyze properties of the human-generated segmentations to learn what they have in common with each other (and with computer-generated segmentations) and to compute evaluation metrics that measure how well the human-generated segmentations match computer-generated ones for the same mesh.



55:148 Digital Image Processing 55:247 Image Analysis and Understanding

Chapter 6, Part III. Shape Representation and Description: Region-Based Shape Representation and Description

  • Simple scalar region descriptors
  • Moments
  • Convex hull
  • Graph representation based on region skeleton
  • Region decomposition
  • Region neighborhood graphs

Region-based shape representation and description

  • Simple scalar region descriptors
  • A large group of shape description techniques is represented by heuristic approaches which yield acceptable results in description of simple shapes.
  • Heuristic region descriptors:
  • rectangularity
  • elongatedness
  • compactness
  • These descriptors cannot be used for region reconstruction and do not work for more complex shapes.
  • Procedures based on region decomposition into smaller and simpler subregions must be applied to describe more complicated regions, then subregions can be described separately using heuristic approaches.
  • Area is given by the number of pixels of which the region consists.
  • The real area of each pixel may be taken into consideration to get the real size of a region.
  • If an image is represented as a rectangular raster, simple counting of region pixels will provide its area.
  • If the image is represented by a quadtree, the area is computed by summing the areas of all leaf nodes that belong to the region.
  • The region can also be represented by n polygon vertices

the sign of the sum represents the polygon orientation.
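The vertex-based area and orientation test above is the shoelace formula; a short sketch:

```python
def signed_polygon_area(vertices):
    """Shoelace formula: area is positive for counter-clockwise vertex order."""
    area2 = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        area2 += x1 * y2 - x2 * y1
    return area2 / 2.0

square_ccw = [(0, 0), (2, 0), (2, 2), (0, 2)]
area = signed_polygon_area(square_ccw)           # 4.0, positive: counter-clockwise
flipped = signed_polygon_area(square_ccw[::-1])  # -4.0, negative: clockwise
```

The absolute value is the region area; the sign alone answers the orientation question.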

  • If the region is represented by the (anti-clockwise) Freeman chain code, the area can be accumulated incrementally while traversing the boundary code.
  • Euler's number (sometimes called genus or the Euler-Poincaré characteristic) describes a simple topologically invariant property of the object: ν = S - N, where S is the number of contiguous parts of an object and N is the number of holes in the object (an object can consist of more than one region).
  • Projections
  • Horizontal and vertical region projections: p_h(i) = sum_j f(i,j) and p_v(j) = sum_i f(i,j).
  • Eccentricity
  • The simplest is the ratio of major and minor axes of an object.
  • Elongatedness
  • A ratio between the length and width of the region bounding rectangle.
  • This criterion cannot succeed in curved regions, for which the evaluation of elongatedness must be based on maximum region thickness.
  • Elongatedness can be evaluated as a ratio of the region area and the square of its thickness: elongatedness = area / (2d)^2.
  • The maximum region thickness (holes must be filled if present) can be determined as the number d of erosion steps that may be applied before the region totally disappears.
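The erosion-based thickness measure can be sketched directly (a minimal 8-connected erosion written out by hand for illustration; image-processing libraries provide equivalent operations):

```python
import numpy as np

def erode(mask):
    """One step of binary erosion with a 3x3 (8-connected) structuring element."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # A pixel survives only if all nine neighbours are set.
            out &= padded[1 + di: 1 + di + mask.shape[0],
                          1 + dj: 1 + dj + mask.shape[1]]
    return out

def thickness_by_erosion(mask):
    """Number of erosion steps until the region totally disappears."""
    mask = mask.astype(bool)
    steps = 0
    while mask.any():
        mask = erode(mask)
        steps += 1
    return steps

region = np.ones((7, 7), dtype=bool)   # a 7x7 square region
d = thickness_by_erosion(region)       # the square vanishes after 4 erosions
```

Each erosion peels one layer of boundary pixels, so the step count is a discrete measure of region thickness.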
  • Rectangularity
  • Let F_k be the ratio of region area and the area of a bounding rectangle, the rectangle having the direction k. The rectangle direction is turned in discrete steps as before, and rectangularity measured as a maximum of this ratio F_k
  • Direction is a property which makes sense in elongated regions only.
  • If the region is elongated, direction is the direction of the longer side of a minimum bounding rectangle.
  • If the shape moments are known, the direction \theta can be computed as \theta = 0.5 arctan( 2 mu_11 / (mu_20 - mu_02) ).
  • Elongatedness and rectangularity are independent of linear transformations -- translation, rotation, and scaling.
  • Direction is independent of all linear transformations which do not include rotation.
  • Mutual direction of two rotating objects is rotation invariant.
  • Compactness
  • Compactness is independent of linear transformations: compactness = (perimeter)^2 / area.
  • The most compact region in a Euclidean space is a circle.
  • Compactness assumes values in the interval [1, infty) in digital images if the boundary is defined as an inner boundary, while using the outer boundary, compactness assumes values in the interval [16, infty).
  • Independence from linear transformations is gained only if an outer boundary representation is used.
  • Region moment representations interpret a normalized gray level image function as a probability density of a 2D random variable.
  • Properties of this random variable can be described using statistical characteristics - moments .
  • Assuming that non-zero pixel values represent regions, moments can be used for binary or gray level region description.
  • A moment of order (p+q) is dependent on scaling, translation, rotation, and even on gray level transformations, and is given by m_pq = \int \int x^p y^q f(x,y) dx dy.
  • In digitized images we evaluate sums: m_pq = \sum_i \sum_j i^p j^q f(i,j),
  • where x,y,i,j are the region point co-ordinates (pixel co-ordinates in digitized images).
  • Translation invariance can be achieved if we use the central moments mu_pq = \int \int (x - x_c)^p (y - y_c)^q f(x,y) dx dy,
  • or in digitized images mu_pq = \sum_i \sum_j (i - x_c)^p (j - y_c)^q f(i,j),
  • where x_c = m_10 / m_00 and y_c = m_01 / m_00 are the co-ordinates of the region's centroid.
  • In the binary case, m_00 represents the region area.
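The raw and central moments above can be sketched directly for a binary region given as a set of pixel coordinates (f(i,j) = 1 inside the region); this is an illustration, not code from the notes.

```python
# Raw moments m_pq and central moments mu_pq of a binary region.

def raw_moment(pixels, p, q):
    """m_pq = sum over region pixels of i^p * j^q (binary f = 1)."""
    return sum(i**p * j**q for i, j in pixels)

def central_moment(pixels, p, q):
    """mu_pq computed about the region centroid (x_c, y_c)."""
    m00 = raw_moment(pixels, 0, 0)
    xc = raw_moment(pixels, 1, 0) / m00   # x_c = m10 / m00
    yc = raw_moment(pixels, 0, 1) / m00   # y_c = m01 / m00
    return sum((i - xc)**p * (j - yc)**q for i, j in pixels)
```

By construction, mu_10 = mu_01 = 0 and mu_00 = m_00 (the area), which is a quick sanity check on any implementation.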
  • Scale invariant features can also be found in scaled central moments and in normalized unscaled central moments.
  • Rotation invariance can be achieved if the co-ordinate system is chosen such that mu_11 = 0.
  • A less general form of invariance is given by seven rotation, translation, and scale invariant moment characteristics (the Hu moment invariants).
  • While the seven moment characteristics presented above were shown to be useful, they are invariant only to translation, rotation, and scaling.
  • A complete set of four affine moment invariants derived from second- and third-order moments is
  • All moment characteristics are dependent on the linear gray level transformations of regions; to describe region shape properties we therefore work with binary image data (f(i,j) = 1 in region pixels), and the dependence on the linear gray level transform disappears.
  • Moment characteristics can be used in shape description even if the region is represented by its boundary.
  • A closed boundary is characterized by an ordered sequence z(i) that represents the Euclidean distance between the centroid and all N boundary pixels of the digitized shape.
  • No extra processing is required for shapes having spiral or concave contours.
  • Translation, rotation, and scale invariant one-dimensional normalized contour sequence moments can be estimated as
  • The r-th normalized contour sequence moment and normalized central contour sequence moment are defined as
  • Less noise-sensitive results can be obtained from the following shape descriptors
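A sketch of contour sequence moments (not from the original notes): z(i) is built as the centroid-to-boundary distance sequence, and the normalization by mu_2^(r/2) used below is one common choice (after Gupta and Srinath) and is an assumption here.

```python
# Scale-invariant contour sequence moments from the distance sequence z(i).
import math

def contour_sequence(boundary):
    """z(i): Euclidean distance of each boundary point from the centroid."""
    n = len(boundary)
    xc = sum(p[0] for p in boundary) / n
    yc = sum(p[1] for p in boundary) / n
    return [math.hypot(x - xc, y - yc) for x, y in boundary]

def normalized_moments(z, r):
    """r-th normalized contour sequence moment and normalized central
    contour sequence moment (both scale invariant)."""
    n = len(z)
    m1 = sum(z) / n
    mu = lambda k: sum((v - m1) ** k for v in z) / n
    scale = mu(2) ** (r / 2)          # dividing by mu_2^(r/2) removes scale
    return sum(v ** r for v in z) / n / scale, mu(r) / scale
```

Scaling every boundary point by a constant scales z by the same constant, so both returned values are unchanged, which the test below exercises.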

Convex hull

  • A region R is convex if and only if for any two points x_1, x_2 from R, the whole line segment defined by its end-points x_1, x_2 is inside the region R.
  • The convex hull of a region is the smallest convex region H which satisfies the condition R is a subset of H.
  • The convex hull has some special properties in digital data which do not exist in the continuous case. For instance, concave parts can appear and disappear in digital data due to rotation, and therefore the convex hull is not rotation invariant in digital space.
  • The convex hull can be used to describe region shape properties and can be used to build a tree structure of region concavity.
  • A discrete convex hull can be defined by the following algorithm which may also be used for convex hull construction.
  • This algorithm has complexity O(n^2) and is presented here as an intuitive way of detecting the convex hull.
  • More efficient algorithms exist, especially if the object is defined by an ordered sequence of n vertices representing a polygonal boundary of the object.
  • If the polygon P is a simple polygon (a non-self-intersecting polygon), which is always the case in a polygonal representation of object borders, the convex hull may be found in linear time O(n).
  • In the past two decades, many linear-time convex hull detection algorithms have been published, however more than half of them were later discovered to be incorrect with counter-examples published.
  • The simplest correct convex hull algorithm was developed by Melkman and is now discussed further.
  • Let the polygon for which the convex hull is to be determined be a simple polygon P = v_1, v_2, ... v_n and let the vertices be processed in this order.
  • For any three vertices x,y,z in an ordered sequence, a directional function delta may be evaluated
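The directional function is the usual orientation determinant; a minimal sketch (the sign convention chosen here, positive when z lies to the left of the directed line x -> y, is an assumption):

```python
# delta(x, y, z): sign of the determinant
#   | x1 x2 1 |
#   | y1 y2 1 |
#   | z1 z2 1 |
# > 0 : z is to the left of the directed line x -> y (left turn)
# = 0 : x, y, z are collinear
# < 0 : z is to the right (right turn)

def delta(x, y, z):
    return (y[0] - x[0]) * (z[1] - x[1]) - (y[1] - x[1]) * (z[0] - x[0])
```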
  • The main data structure H is a list of vertices (deque) of polygonal vertices already processed.
  • The current contents of H represents the convex hull of the currently processed part of the polygon, and after the detection is completed, the convex hull is stored in this data structure.
  • Therefore, H always represents a closed polygonal curve, H={d_b, ... ,d_t} where d_b points to the bottom of the list and d_t points to its top.
  • Note that d_b and d_t always refer to the same vertex simultaneously representing the first and the last vertex of the closed polygon.
  • Main ideas of the algorithm:
  • The first three vertices A,B,C from the sequence P form a triangle (if not collinear) and this triangle represents a convex hull of the first three vertices.
  • The next vertex D in the sequence is then tested for being located inside or outside the current convex hull.
  • If D is located inside, the current convex hull does not change.
  • If D is outside of the current convex hull, it must become a new convex hull vertex and based on the current convex hull shape, either none, one, or several vertices must be removed from the current convex hull.
  • This process is repeated for all remaining vertices in the sequence P.
  • The variable v refers to the input vertex under consideration, and the following operations are defined:
  • The algorithm is then:
  • The algorithm as presented may be difficult to follow; however, a less formal version would not be precise enough to implement.
  • The following example makes the algorithm more understandable.
  • A new vertex should be entered from P, however there is no unprocessed vertex in the sequence P and the convex hull generating process stops.
  • The resulting convex hull is defined by the sequence H={d_b, ... ,d_t}={D,C,A,D} which represents a polygon DCAD, always in the clockwise direction.
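Melkman's algorithm can be sketched as below (an illustration, not the notes' exact formulation). The deque H keeps the same vertex at both ends, matching the closed-polygon property described above; the first three vertices are assumed non-collinear, and with the left-turn sign convention used here the hull comes out anti-clockwise (the text's example lists it clockwise, which is just the opposite sign convention for delta).

```python
# Melkman's on-line convex hull algorithm for a simple polygon.
from collections import deque

def melkman(polygon):
    def d(x, y, z):   # directional function: > 0 means z left of x -> y
        return (y[0] - x[0]) * (z[1] - x[1]) - (y[1] - x[1]) * (z[0] - x[0])
    a, b, c = polygon[:3]         # assumed non-collinear
    H = deque([c, a, b, c]) if d(a, b, c) > 0 else deque([c, b, a, c])
    for v in polygon[3:]:
        # v inside the current hull: the hull does not change
        if d(H[-2], H[-1], v) > 0 and d(H[0], H[1], v) > 0:
            continue
        while d(H[-2], H[-1], v) <= 0:   # remove vertices from the top
            H.pop()
        H.append(v)
        while d(H[0], H[1], v) <= 0:     # remove vertices from the bottom
            H.popleft()
        H.appendleft(v)
    return list(H)                        # closed: first vertex == last
```

Because each vertex is pushed and popped at most a constant number of times, the whole run is O(n).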
  • A region concavity tree is generated recursively during the construction of a convex hull.
  • A convex hull of the whole region is constructed first, and convex hulls of concave residua are found next.
  • The resulting convex hulls of concave residua of the regions from previous steps are searched until no concave residuum exists.
  • The resulting tree is a shape representation of the region.
  • Objects are represented by a planar graph with nodes representing subregions resulting from region decomposition, and region shape is then described by the graph properties.
  • There are two general approaches to acquiring a graph of subregions:
  • The first one is region thinning leading to the region skeleton , which can be described by a graph.
  • The second option starts with the region decomposition into subregions, which are then represented by nodes while arcs represent neighborhood relations of subregions.
  • Graphical representation of regions has many advantages; the resulting graphs
  • are translation and rotation invariant; position and rotation can be included in the graph definition
  • are insensitive to small changes in shape
  • are highly invariant with respect to region magnitude
  • generate a representation which is understandable
  • can easily be used to obtain the information-bearing features of the graph
  • are suitable for syntactic recognition
  • Graph representation based on region skeleton
  • This method makes significantly curved points of a region boundary correspond to graph nodes.
  • The main disadvantage of boundary-based description methods is that geometrically close points can be far away from one another when the boundary is described - graphical representation methods overcome this disadvantage.
  • The region graph is based on the region skeleton, and the first step is the skeleton construction.
  • There are four basic approaches to skeleton construction:
  • thinning - iterative removal of region boundary pixels
  • wave propagation from the boundary
  • detection of local maxima in the distance-transformed image of the region
  • analytical methods
  • Most thinning procedures repeatedly remove boundary elements until a pixel set with maximum thickness of one or two is found. The following algorithm constructs a skeleton of maximum thickness two.
  • Steps of this algorithm are illustrated in the next Figure.
  • If there are skeleton segments which have a thickness of two, one extra step can be added to reduce those to a thickness of one, although care must be taken not to break the skeleton connectivity.
  • Thinning is generally a time-consuming process, although sometimes it is not necessary to look for a skeleton, and one side of a parallel boundary can be used for skeleton-like region representation.
  • Mathematical morphology is a powerful tool used to find the region skeleton.
  • Thinning procedures often use a medial axis transform to construct a region skeleton.
  • Under the medial axis definition, the skeleton is the set of all region points which have the same minimum distance from the region boundary for at least two separate boundary points.
  • Such a skeleton can be constructed using a distance transform which assigns a value to each region pixel representing its (minimum) distance from the region's boundary.
  • The skeleton can be determined as a set of pixels whose distance from the region's border is locally maximal.
  • Every skeleton element can be accompanied by information about its distance from the boundary -- this gives the potential to reconstruct a region as an envelope curve of circles with center points at skeleton elements and radii corresponding to the stored distance values.
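The distance-transform route can be sketched as follows (an illustration, not a full skeletonization algorithm): a city-block distance transform via multi-source BFS from the background, then skeleton candidates as pixels whose distance value is a local maximum.

```python
# Distance transform of a region and local maxima as skeleton candidates.
from collections import deque

def distance_transform(pixels):
    """Minimum city-block distance of each region pixel from the background."""
    dist = {}
    queue = deque()
    for (i, j) in pixels:
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if (i + di, j + dj) not in pixels:   # borders the background
                dist[(i, j)] = 1
                queue.append((i, j))
                break
    while queue:                                  # BFS wave inwards
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            p = (i + di, j + dj)
            if p in pixels and p not in dist:
                dist[p] = dist[(i, j)] + 1
                queue.append(p)
    return dist

def local_maxima(dist):
    """Pixels whose distance value is >= all 8-neighbours."""
    n8 = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1) if (a, b) != (0, 0)]
    return {p for p, v in dist.items()
            if all(dist.get((p[0] + a, p[1] + b), 0) <= v for a, b in n8)}
```

Storing dist alongside each skeleton candidate is exactly the information needed for the envelope-of-circles reconstruction mentioned above.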
  • Small changes in the boundary may cause serious changes in the skeleton.
  • This sensitivity can be removed by first representing the region as a polygon, then constructing the skeleton.
  • Boundary noise removal can be absorbed into the polygon construction.
  • A multi-resolution approach to skeleton construction may also result in decreased sensitivity to boundary noise.
  • Similarly, the approach using the Marr-Hildreth edge detector with varying smoothing parameter facilitates scale-based representation of the region's skeleton.
  • Skeleton construction algorithms do not result in graphs but the transformation from skeletons to graphs is relatively straightforward.
  • Consider first the medial axis skeleton, and assume that a minimum radius circle has been drawn from each point of the skeleton which has at least one point common with a region boundary.
  • Call a contact each contiguous subset of the circle which is common to the circle and to the boundary.
  • If a circle drawn from its center A has one contact only, A is a skeleton end-point.
  • If the point A has two contacts, it is a normal skeleton point.
  • If A has three or more contacts, the point A is a skeleton node-point.
  • It can be seen that boundary points of high curvature have the main influence on the graph.
  • They are represented by graph nodes, and therefore influence the graph structure.
  • If other than medial axis skeletons are used for graph construction, end-points can be defined as skeleton points having just one skeleton neighbor, normal-points as having two skeleton neighbors, and node-points as having at least three skeleton neighbors.
  • It is no longer true that node-points are never neighbors and additional conditions must be used to decide when node-points should be represented as nodes in a graph and when they should not.
  • Region decomposition
  • The decomposition approach is based on the idea that shape recognition is a hierarchical process.
  • Shape primitives are defined at the lower level, primitives being the simplest elements which form the region.
  • A graph is constructed at the higher level - nodes result from primitives, arcs describe the mutual primitive relations.
  • Convex sets of pixels are one example of simple shape primitives.
  • The solution to the decomposition problem consists of two main steps:
  • The first step is to segment a region into simpler subregions (primitives) and the second is the analysis of primitives.
  • Primitives are simple enough to be successfully described using simple scalar shape properties.
  • If subregions are represented by polygons, graph nodes bear the following information:
  1. Node type representing primary subregion or kernel.
  2. Number of vertices of the subregion represented by the node.
  3. Area of the subregion represented by the node.
  4. Main axis direction of the subregion represented by the node.
  5. Center of gravity of the subregion represented by the node.
  • If a graph is derived using attributes 1-4, the final description is translation invariant.
  • A graph derived from attributes 1-3 is translation and rotation invariant.
  • Derivation using the first two attributes results in a description which is size invariant in addition to possessing translation and rotation invariance.
  • Any time a region decomposition into subregions or an image decomposition into regions is available, the region or image can be represented by a region neighborhood graph (the region adjacency graph being a special case).
  • This graph represents every region as a graph node, and nodes of neighboring regions are connected by edges.
  • A region neighborhood graph can be constructed from a quadtree image representation, from run-length encoded image data, etc.
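Building a region adjacency graph from a label image can be sketched in a few lines (an illustration, not from the notes): every region becomes a node, and each pair of 4-adjacent pixels with different labels contributes an edge.

```python
# Region adjacency graph from a label image (list of rows of region labels).

def region_adjacency_graph(labels):
    h, w = len(labels), len(labels[0])
    nodes = {labels[i][j] for i in range(h) for j in range(w)}
    edges = set()
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < h and nj < w and labels[i][j] != labels[ni][nj]:
                    edges.add(frozenset((labels[i][j], labels[ni][nj])))
    return nodes, edges
```

Checking only the right and down neighbours visits every 4-adjacent pair exactly once, and the frozenset makes edges undirected.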
  • Very often, the relative position of two regions can be used in the description process -- for example, a region A may be positioned to the left of a region B, or above B, or close to B, or a region C may lie between regions A and B, etc.
  • We know the meaning of all of the given relations if A,B,C are points, but, with the exception of the relation to be close , they can become ambiguous if A,B,C are regions.
  • For instance, human observers are generally satisfied with the definition:
  • The center of gravity of A must be positioned to the left of the leftmost point of B and (logical AND) the rightmost pixel of A must be left of the rightmost pixel of B

Last Modified: February 3, 1997

  • Statistics and Analysis of Shapes
  • January 2007
  • ISBN: 978-0-8176-4376-8

Hamid Krim at North Carolina State University



Fig. 1. Level lines and T-junction. Depending on the grey level configuration between shapes and background, level lines may or may not follow the objects boundary. In any case, junctions appear where two level lines separate. Here, there are two kinds of level lines: the occluded circle and the shape composed of the union of the circle and the square. The square itself may be retrieved by difference. 





Research • August 21, 2024

150 Days After Dencun

Exploring the economic impact of blobs on Ethereum and rollups.

Zack Pokorny

Research Analyst


Key Takeaways

There have been 2,225,958 blobs purchased at an average cost of $1.59/blob and 1,104,315 blob carrying Layer 1 transactions at an average cost of $5.22/transaction in the 150-day period following the implementation of EIP-4844 (as of August 10, 2024). In total, Ethereum has generated 2,692.39 ETH and $9,318,794 in revenue from blobs. 2,408.41 ETH, or 89.45% of total fee revenue from blobs, has been burned; the rest went to validators in the form of priority tips.

Rollups have purchased approximately 285 gigabytes of blob data in total and have only used about 76% of their capacity. Blobs have a fixed size of 128KB and a maximum of 6 blobs can be processed in an Ethereum block. The data contained in each blob is ephemeral and automatically pruned from most Ethereum nodes after a period of roughly two weeks.

Rollups have spent a total of $3,549,430 on blobs, which puts the cost at $16,473 per gigabyte used and $12,458 per gigabyte purchased. These figures span the period starting March 13, 2024 to August 10, 2024.

Rollup costs came down substantially, with Arbitrum, OP Mainnet, Base, zkSync, Linea, and Scroll paying a combined average of $556.4k in operating costs daily post-Dencun. This compares to a daily average of $1.07m in the 150 days leading into Dencun under the use of calldata, an alternative way to store arbitrary data on Ethereum.

Rollup margins improved on a relative basis, with optimistic rollup margins (using Base, OP Mainnet, and Arbitrum as a proxy) strengthening from 22.65% in the 150 days leading into Dencun to 92.3% in the 150 days after Dencun; and zero-knowledge rollup margins (using zkSync, Scroll, and Linea as a proxy) expanding from 27.27% before Dencun to 66.7% after.

Rollup margins in absolute terms have improved after Dencun despite fee revenues dropping 42% from pre-Dencun daily values. Rollups are earning more bottom-line income than they were before Dencun.

Activity on the leading L2s picked up immediately post-Dencun. However, the rise in transaction counts has been met with rising failure rates. The majority of the failed transactions stem from high activity addresses, likely bots. Low fees on L2s could be driving increased bot activity. Average users that are not sending high volumes of transactions are experiencing failed transaction rates at a level only slightly more elevated than pre-Dencun levels.

Ethereum is seeing significantly less revenue generated and ETH supply burned post-Dencun. Total revenue earned is 69% below that of the average 150-day rolling sum before the upgrade; ETH burned is 84% below that of the average 150-day rolling sum before the upgrade.

Key Definitions

Blob – A binary large object or “blob” is a temporary data storage space for rollup data on Ethereum’s consensus client. Blobs were introduced as part of EIP-4844. The storage can be used for any kind of data just like the previously relied on calldata space before EIP-4844. However, Ethereum developers intend for blobs to be used by rollups for storing transaction data. There is a target rate of three blobs per Ethereum block and max number of six. “Blob sidecar” is used in reference to the object wrappers for blobs that carry transaction data in a block. “Blobdata” is used in reference to the data that is stored in a blob.

Blob Carrying Transaction – Blob carrying transactions, or “type-3” transactions, are EIP-4844 transactions that include a reference to a blob, but not the blob itself. The blobs are gossiped through a consensus client blob sidecar and are not available to the execution client. Blob carrying transactions are handled on Ethereum’s execution layer (Layer 1), and include the fees associated with blobs in addition to the typical base and priority fees of Ethereum transactions.

Blob Capacity – the maximum amount of blobdata that can be stored in a blob. Each blob can hold up to 128 KB of data. Blob purchasers must pay for the full capacity of the blob whether or not they fill it.

Blob Capacity Used – the share of each blob’s 128 KB capacity a rollup actually fills with batched transaction data.

Calldata – a dedicated data storage space attached to every Ethereum transaction. This dedicated space can be used to store any kind of data and was commonly used by rollups as the space to store their transaction data before EIP-4844. Some rollups still use calldata after EIP-4844.

Batch – a bundle of Layer 2 transactions that are “rolled up” together and submitted as a single transaction to Ethereum. Bundling transactions into a batch is how Layer 2 (L2) rollups reduce fees for Ethereum users. Batches are mostly stored in blobs today and in transaction calldata previously.

Ethereum Execution Layer – also referred to as Ethereum Layer 1 (L1), the execution layer is the part of the Ethereum network that processes transactions and executes smart contracts. It also contains the EVM, the execution engine of Ethereum that enforces rules and pricing for all on-chain operations.

Ethereum Consensus Layer – the part of Ethereum that implements the proof-of-stake (PoS) consensus algorithm, which enables the network to achieve agreement based on validated data from the execution client. The consensus layer is where blobdata is stored.

Background on EIP-4844

Ethereum’s Dencun upgrade was successfully activated at 1:55pm UTC on March 13, 2024 (beacon slot 8,626,178 and epoch 269,568). In the upgrade’s collection of nine Ethereum Improvement Proposals (EIPs) is EIP-4844. This EIP offers a solution to reduce operating costs for Ethereum rollups through the use of blobs and type-3 transactions. Blobs and type-3 transactions serve as cheaper data storage than calldata and type-2 transactions for rollups.

Unlike calldata, which is stored on Ethereum’s execution layer, blobdata is temporarily stored on the consensus layer and is inaccessible for smart contracts to query through the Ethereum Virtual Machine (EVM). This reduces the computational burden, and consequently the cost, of storing rollup batch data and posting it to Ethereum by reducing rollups’ reliance on calldata and Ethereum’s execution layer. Blobdata is then posted to Ethereum through type-3 transactions, which contain references to the blobs that are stored on the consensus layer, but not the blobs themselves or the data they contain.

The total fee for storing rollup data in the calldata of a transaction and posting them with type-2 transactions is calculated as:

(Base Fee * Calldata Gas Used) + (Priority Fee * Calldata Gas Used)

By comparison, the total fee associated with type-3 transactions containing blobdata on Ethereum is calculated as:

(Blob Base Fee * Blob Gas Used) + (Base Fee * Calldata Gas Used) + (Priority Fee * Calldata Gas Used)

Despite the addition of blob base fees to the total fee calculation, type-3 transactions are generally cheaper than type-2 transactions because less calldata gas is used in type-3 transactions and the fees associated with blob gas are generally lower than the fees associated with calldata gas.

Blobs have their own fee market independent of that of any Ethereum L1 transaction type. The blob fee market has its own parameters for setting base fees that are only influenced by demand for blobs in the prior block. The activity that takes place on Ethereum L1 (e.g. users swapping on dexes) does not have an impact on the cost of blobs. Activity on Ethereum L1 does impact the base fee of blob-carrying transactions but it does not impact the blob base fee.

The base fee for blobs in its independent market is set to 1 wei and scales up as demand for them climbs above the target number per block towards the maximum. In units of gas, a single blob requires 131,072 gas. At an ETH price of $3,000, this means the base fee for a blob could be as low as 0.000000000000131072 ETH (12 zeros) or $0.00000000039322 (nine zeros). So, the first leg of the type-3 transaction fee calculation above can be effectively free at times.
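The arithmetic above can be reproduced in a few lines; this is just a sketch of the first leg of the type-3 fee calculation, using the text's illustrative ETH price of $3,000.

```python
# Cost of one blob at the floor blob base fee of 1 wei.
WEI_PER_ETH = 10**18
BLOB_GAS_PER_BLOB = 131_072   # gas required by a single blob

def blob_base_cost_eth(blob_base_fee_wei):
    """(Blob Base Fee * Blob Gas Used) / 1e18, i.e. the cost in ETH."""
    return blob_base_fee_wei * BLOB_GAS_PER_BLOB / WEI_PER_ETH

cost_eth = blob_base_cost_eth(1)   # 1.31072e-13 ETH at the 1 wei floor
cost_usd = cost_eth * 3000         # ~3.93e-10 USD at $3,000 per ETH
```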

This report examines the state of Ethereum blob use from the storage and posting perspectives on both the consensus and execution clients, and EIP-4844’s impact on the economics of rollups. It does not include any analysis on the other eight EIPs included in the Dencun upgrade.

Blobs on Ethereum’s Consensus Layer

Starting with the blob landscape from the perspective of Ethereum’s consensus layer, this section includes information about the number of blobs purchased, the total spends by rollups on consensus layer blobs, and the amount of blob capacity purchased and used.

The table below offers a high-level overview of the top 10 Ethereum rollups by number of blobs purchased. Cumulatively, they purchased 1,939,657 blobs (248.2 gigabytes of blob capacity purchased representing 87% of total blob demand) and sunk $2.1m (603.6 ETH) in blob spend in the 150-day period since EIP-4844 activation. Paradex, StarkNet, and Arbitrum used the greatest share of blob capacity purchased at an average usage rate of 95.23%. Base has spent the most on blobs over this time, spending $811k or 232.5 ETH across 619,516 blobs.

The cost of a single blob is calculated in units of ETH with the following formula, where Blob Gas Used is 131,072 gas:

(Blob Base Fee * Blob Gas Used) / 1e18

Blob costs can also be calculated in USD by multiplying the outcome of this formula by the price of ETH at the time of blob inclusion in an Ethereum block. This is the price of ETH when the blob carrying transactions are executed on-chain. As explained in the previous section of this report, the total cost of using blobs is the sum of all three components of the type-3 transaction fee calculation above. The cost of a blob itself is only the first leg of the total transaction fee calculation. The other two values tied to the cost of the blob carrying transactions will be covered in more detail in the next section of this report.

Blobs have seen consistent demand from rollups between 15,000 and 20,000 blobs per day since May 28, 2024. This date is significant as it marked the launch of Taiko, a based rollup. Based rollups are unique in that transactions on them aren’t confirmed until they are confirmed on Ethereum L1. This requires them to continuously purchase blobs to keep the chain functioning. They differ from optimistic and zero-knowledge rollups, which offer soft confirmations of transactions on the rollup and then confirm them on the L1 at a future time. This allows the sequencer more flexibility in the number of blobs purchased and the L1 transaction frequency. Despite being a relatively new rollup, Taiko ranks as the third largest spender on blobs on the consensus layer side and the largest spender overall on blob carrying transactions on the execution layer side.

Base and Taiko are the top two biggest daily purchasers of blobs, averaging a combined 8,667 blobs purchased per day (47.4% of all demand for blobs) since June 1, 2024. Base averaged 5,094.5 blobs purchased per day and Taiko 3,572.5 over this period. Despite launching 75 days after the introduction of blobs, Taiko is still one of the top three rollups by number of blobs purchased since Dencun.

From a data demand perspective, rollups have been consistently purchasing between 2 and 2.5 gigabytes of blobs per day since June 1, 2024. However, they have only been filling the blobs they purchase with 1.5 to 2 gigabytes of data. This highlights that rollups have only been using 71.91% of the capacity they have been paying for over this period.

June 20, 2024 and June 21, 2024 marked the two highest days of blobdata usage at 2.66 and 2.75 gigabytes respectively. June 20, 2024 was the Layer Zero airdrop on Arbitrum , which sent activity on Arbitrum soaring. The period between March 27, 2024 and April 3, 2024 marked the greatest demand for blob capacity at around 2.7 gigabytes per day. Blobscriptions were popular over this period, which generated artificial demand for blobs. Blobscriptions allow users to embed arbitrary data into blobs that is unrelated to their intended use of carrying rollup transaction data.

In the 150 days following Dencun, there has been an average of 21.1% of daily unused blob capacity. The chart below highlights the composition of blob capacity on a daily basis between capacity filled by rollups and unused capacity.

There have been nine noteworthy days when rollup blob spends exceeded $100k per day. These days were marked by the Blobscription mania of late March to early April and the day of the Layer Zero airdrop on Arbitrum on June 20. These nine days combined for $3,542,579 in total blob spend from rollups at an average daily spend of $394k per day (815,714% higher than the average of all other days). Moreover, these nine days account for 99.8% of the $3,549,430 in total blob spend since Dencun went live. Excluding these outliers, rollups spend a combined average of just $48.25 per day on blobs.

Also note, these days were marked by an average of 2.9 blobs per Ethereum block, which is 52% below the max number of blobs per block of six, 4% below the target number of three, and 40% higher than the daily average of 2.06 blobs per block since EIP-4844 launched. This highlights that costs go parabolic when demand for blobs consistently meets or exceeds the target rate.

Blob Costs Per Gigabyte

Another way to measure the cost of blobs is based on the amount of space purchased and ultimately filled by rollups. Using data on rollups’ daily blob spend and blob usage, we can get a sense of how much it costs to purchase and fill 1GB of blob space. The chart below takes the daily blob spend of rollups and sets it over the daily amount of blob capacity demanded and the amount of blob capacity used to measure the cost per gigabyte of blobs.

In the height of the Blobscription mania, the cost of blobs reached $503k per gigabyte of blob capacity used and $207k per gigabyte of blob capacity purchased. The day of the Layer Zero airdrop on Arbitrum saw blob costs reach $410k per gigabyte of capacity used and $330k per gigabyte of capacity purchased. Note, the per unit cost of capacity used is greater than the per unit cost of capacity purchased because rollups did not use the full capacity of blobs in aggregate on these days.

During the Blobscription mania, blob capacity utilization averaged just 50%, and at times up to 63% of the blob capacity purchased was left unused by rollups. On the day of the Layer Zero airdrop, June 20, 19.5% of blob capacity went unused.

In total, rollups have spent $3.55m and 1,020 ETH on consensus layer blob spend in the 150 days since Dencun went live. ETH spent on blobs is removed from circulation, just as base fees of transactions executed on Ethereum’s execution layer are burned. The values below represent the amount of ETH removed from circulating supply due to consensus layer blobdata activity. They do not reflect the amount of ETH burned through blob carrying transactions on the execution layer – more on this will be covered in the next section of the report.

In aggregate, since the introduction of EIP 4844, rollups have spent $12,458 per gigabyte of blob space purchased and $16,473 per gigabyte of blob space used.

In the next section of the report, we analyze the blob landscape from the perspective of Ethereum’s execution layer. This includes components like the number of blob carrying transactions executed, the cost at which they were executed, and the average number of blobs per blob carrying transaction. Note, these values exclude the cost of blobdata in the blob carrying transactions and focus on the last two components of the total blob fee calculation: the execution layer base fee and priority fee.

An analysis of the cost differences between type-3 transactions including blobdata fees and type-2 transactions with rollup data written to calldata will be shared later in this report.

Blob Transactions on Ethereum’s Execution Layer

The table below offers a high-level overview of the top 10 Ethereum rollups by the amount of ETH spent on blob carrying transactions on Ethereum’s execution layer. Taiko has spent the most on blob carrying transactions by a substantial margin, sinking 631.67 ETH, of which 423.58 ETH was burned as base fees ($2.2m total). zkSync has spent the most on a per transaction basis at 0.0045672 ETH per transaction ($15.78 per transaction).

Rollups have been consistently executing 8,000 to 10,000 blob carrying transactions per day since May 28, 2024, at an average cadence of 2.02 blobs per Ethereum transaction (total number of blobs purchased / total number of blob carrying transactions). Taiko posts the most blob carrying transactions to Ethereum daily, averaging 3,558 transactions per day.

Blob carrying transaction spend across all rollups has averaged $34.1k daily since the introduction of blobs, and $41.8k daily since Taiko launched on May 27. It’s important to note that throughout this period gas prices for all Ethereum transactions have been trending downwards. The average daily gas price was 3.5 gwei as of August 10, 2024, which is 67.2% below the 356-day daily average and 94.5% below the average gas price on March 13, 2024. In total, rollups have spent $5.77m on blob carrying transactions since Dencun went live.

Blob-Carrying Transaction Spend by Fee Type

The majority of the execution layer costs paid have been base fees, which represent 83% of the total fees paid. Base fees on Ethereum are burned and permanently removed from the circulating supply, while priority fees are paid to validators as “tips.” Since the launch of Dencun, validators have received $974.8k in priority fees from rollups. Taiko has paid the most priority fees at $719.7k, which makes up 74% of total priority fees paid by all rollups. Of Taiko’s total spend on execution layer blob costs, 33% goes to priority fees – a greater portion than any other rollup. StarkNet spends the least on priority fees: just 0.49% of its total execution layer blob costs.

Measuring base and priority fees paid in ETH terms tells us precisely how much ETH has been burned through blob carrying transactions. In total, 1,389 ETH has been burned through the execution of blob carrying execution layer transactions. 283.97 ETH in priority fees has been paid to validators as tips.

More on ETH burns and revenue captured by validators will be covered in the following section of this report.

Impact on Ethereum Validators and ETH Supply

Due to the introduction of blobs, rollup activity now generates less total revenue on Ethereum and burns less ETH. The following analysis highlights 1) the change in ETH burned, 2) revenue paid directly to validators, and 3) total revenue captured by Ethereum before and after the implementation of EIP-4844.

To do this, we compare the rolling 150-day sums of each of these data points leading into EIP-4844 to the outright sum of the same metric in the 150 days after EIP-4844. The 150-day rolling sums begin on January 1, 2022, so they capture pre-Dencun values from as early as August 5, 2021. The analysis captures data from 24 different rollups. As mentioned earlier, the blob and base fees of type-3 transactions are burned, and priority fees are paid as tip revenue to validators. Pre-Dencun values exclude revenue contributions and ETH burns from zero-knowledge (ZK) proofs, a type of data posted to Ethereum by ZK rollups. Our analysis only includes protocol revenue and burns from transaction batch commits by optimistic and ZK rollups. Note, pre-Dencun values stop on March 13, 2024 (the day EIP-4844 went live).
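The windowing can be sketched in plain Python (placeholder values; the real series comes from the Dune data behind this report). Note how the first complete 150-day window, ending January 1, 2022, reaches back to August 5, 2021:

```python
from datetime import date, timedelta

# Placeholder daily series of ETH burned from rollup calldata batch commits;
# real values come from the Dune queries cited in this report.
start = date(2021, 8, 5)
daily = [(start + timedelta(days=i), 100.0) for i in range(400)]

WINDOW = 150

# Trailing 150-day sums: pair each window's end date with the sum of the
# preceding 150 daily values (inclusive of the end date).
rolling = [
    (daily[i][0], sum(value for _, value in daily[i - WINDOW + 1 : i + 1]))
    for i in range(WINDOW - 1, len(daily))
]

# The first complete window ends on January 1, 2022 and reaches back to
# August 5, 2021, mirroring the report's methodology.
print(rolling[0])  # (datetime.date(2022, 1, 1), 15000.0)
```

Each rolling-sum point is then directly comparable to the single outright 150-day sum computed for the post-Dencun period.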

The chart below compares the total amount of ETH burned in the 150-day period following the introduction of blobs to the historical 150-day rolling sum of ETH burned from rollup calldata batch commits pre-Dencun. While some rollups still rely on calldata post-Dencun, they use it to a much lesser extent. An estimated 39.69 ETH in Ethereum protocol fee revenue was generated from rollup calldata usage in the trailing 30-day period ending August 10, 2024, compared to 13,163 ETH in the 30-day period leading into Dencun.

The amount of ETH burned from the base and blob fees of type-3 transactions following Dencun is lower than any of the 150-day rolling sums of data posting under the use of calldata since January 1, 2022. 2,408 ETH has been burned since March 13, 2024 compared to a minimum 150-day rolling sum of 3,286 ETH leading up to Dencun. The average 150-day rolling sum of ETH burned under calldata batch posting up to EIP-4844 was 15,052 ETH. This shows that significantly less ETH is being burned from rollup transaction batch commits on Ethereum after the activation of EIP-4844.

The next chart highlights the value paid directly to validators before and after EIP-4844. In total, $974,876 has been paid to validators by way of type-3 transaction priority fees. This is comparable to levels last seen in May 2022 and June 2023 using the 150-day rolling sum of priority fees from calldata batch commits before the use of blobs. Calldata batch commits generated an average 150-day rolling sum of $1.196m in priority fees. Validators are earning less daily through priority fees from type-3 transactions containing blobdata than from type-2 transactions containing calldata.

Lastly, we can compare the total revenue earned from blobs to the total revenue earned from calldata batch commits.

The calculation for the total revenue earned from blobs is:

(Blob Base Fee * Blob Gas Used) + (Base Fee * Calldata Gas Used) + (Priority Fee * Calldata Gas Used)

The calculation for the total revenue earned from calldata batch commits is:

(Base Fee * Calldata Gas Used) + (Priority Fee * Calldata Gas Used)

Ethereum has earned $9,318,794 in total revenue from blobdata and type-3 transaction base and priority fees, compared to an average 150-day rolling sum of $29.92m in total revenue from type-2 transactions and the use of calldata.
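Transcribed directly into code (illustrative fee inputs in wei; the parameter names mirror the formulas above):

```python
def blob_total_revenue(blob_base_fee: int, blob_gas_used: int,
                       base_fee: int, calldata_gas_used: int,
                       priority_fee: int) -> int:
    """Total revenue from blobs: blob fee plus the type-3 transaction's base and priority fees."""
    return (blob_base_fee * blob_gas_used
            + base_fee * calldata_gas_used
            + priority_fee * calldata_gas_used)

def calldata_total_revenue(base_fee: int, calldata_gas_used: int,
                           priority_fee: int) -> int:
    """Total revenue from a calldata batch commit: base plus priority fees only."""
    return base_fee * calldata_gas_used + priority_fee * calldata_gas_used

# Illustrative example: one blob at 2 wei per unit of blob gas, plus 21,000 gas
# of execution layer usage at a 10 wei base fee and 1 wei priority fee.
print(blob_total_revenue(2, 131_072, 10, 21_000, 1))  # 493144
print(calldata_total_revenue(10, 21_000, 1))          # 231000
```

The only difference between the two calculations is the blob fee term, which, like the base fee, is burned.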

Contributions to L1 by Rollup Type

The table below offers a breakdown of the top 25 rollups used in the above analysis by type and their respective contributions to Ethereum protocol fee revenues post-Dencun. In total, we evaluated 18 optimistic rollups, 6 zero-knowledge rollups, and 1 based rollup.

Notably, the only based rollup in our analysis, Taiko, contributed 74% of all tips paid to validators and more than a quarter of all ETH burned under EIP-4844. Together, these 25 rollups make up 81% of ETH burned, 98% of tips paid to validators, and 83% of total revenue generated from all blobs and type-3 transactions executed on Ethereum.

Impact on Ethereum Rollups

The goal of EIP-4844 was to reduce the operating cost of rollups to make them more affordable to operate and use. The following section assesses the impact the upgrade had on rollup economics and activity. It uses Arbitrum, Base, OP Mainnet, Linea, Scroll, and zkSync for the analysis. These are the top three optimistic and top three ZK rollups by total value locked on the network.

High-Level Economics

To start, the costs for rollups to operate came down substantially after EIP-4844 went live, with the exception of the Blobscription mania, Layer Zero airdrop, and August 5 market volatility days. The chart below looks at the daily total costs incurred by rollups, including the cost of blobdata, of posting the blobdata via type-3 transactions, of calldata batch commits, and of zero-knowledge proofs (in the cases of Scroll, Linea, and zkSync). Since the activation of blobs, including the outlier days and the time it took rollups to implement the use of blobdata, these rollups have paid an average of $556.4k in operating costs daily. This compares to daily averages of $1.9m in the 30-day period, $1.27m in the 90-day period, and $1.07m in the 150-day period leading into the implementation of EIP-4844.

For more on the cost savings of blobs relative to calldata, see this Galaxy Research Dune query, which compares the actual cost of blobdata to what it would theoretically cost if the same blobdata was posted to Ethereum through calldata.

The reduction in operating costs has been met with a decline in revenues captured by these rollups. The day of Layer Zero’s airdrop on Arbitrum and August 5, 2024 were notable exceptions. Arbitrum captured the majority of the revenues on these outlier days. The combined total revenue of the six rollups was $6.09m on these two days, with Arbitrum capturing $4.63m (76%).

Since the activation of blobs, including the outlier days and the time it took rollups to implement the use of blobdata, these rollups have earned an average of $691.3k in revenue daily. This compares to daily averages of $2.33m in the 30-day period, $1.46m in the 90-day period, and $1.2m in the 150-day period leading into the implementation of EIP-4844.

While revenues have declined, the margins of these rollups have improved in absolute terms. The chart below shows the bottom-line margin of the observed rollups, calculated as revenue less all operating costs. Base was the only optimistic rollup in our analysis to have a day of net losses since EIP-4844 went live. On June 20, 2024, the rollup lost $175k, a loss it more than recovered over the three days that followed. Scroll was the only ZK rollup to have negative margin days. On March 26, 2024, the rollup lost $25k but generated $150k in net margin the following day. It also lost $6.7k on May 28, 2024, $8k on June 5, 2024, and $9k on June 6, 2024.

Since the activation of blobs, including the outlier days and the time it took rollups to implement the use of blobdata, these rollups have taken home an average of $553k daily. This compares to daily averages of $685k in the 30-day period, $389k in the 90-day period, and $324k in the 150-day period leading into the implementation of EIP-4844.

Rollup margins expressed as a percentage show the relative improvement in the share of their revenue going towards operating costs. Rollups are keeping more of the revenue they earn after EIP-4844 than before. The chart below separates optimistic and ZK rollup margins to highlight how the use of blobs has impacted each rollup type’s profit margin. The average daily percentage margin of optimistic rollups since March 13, 2024, has been 92.3%; the average daily percentage margin of ZK rollups has been 66.7% over the same period. The additional proof costs paid by ZK rollups dampen their margins on a relative basis.
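The margin figures reduce to one line of arithmetic (hypothetical revenue and cost values):

```python
def percent_margin(revenue_usd: float, total_costs_usd: float) -> float:
    """Bottom-line margin as a share of revenue: (revenue - costs) / revenue."""
    return (revenue_usd - total_costs_usd) / revenue_usd

# Hypothetical optimistic rollup day: $100k revenue against $7.7k in total costs.
print(percent_margin(100_000, 7_700))  # 0.923
```

The complement of this figure, 1 minus the margin, is the share of rollup-generated value that flows to Ethereum L1, the measure used later in this report.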

More detail on the economic impact of EIP-4844 on rollups can be found on this Galaxy Research Dune data dashboard.

Rollup Activity Before and After EIP-4844

User activity on the six observed rollups picked up immediately after the activation of EIP-4844 and has sustained an elevated amount of use through the 150 days that followed.

The chart below highlights the immediate impact of EIP-4844 on the transaction activity of these rollups. From the period starting December 1, 2023, and ending March 12, 2024, the rollups averaged 3.285 million transactions daily compared to 6.656 million transactions in the 150 days that have followed EIP-4844. This indicates transaction activity more than doubled. An identical trend is observable across all Ethereum rollups.

The decline in costs to use these networks is the main reason for the increased usage of rollups post-Dencun. The chart below highlights the significant reduction in transaction costs on each of the networks.

Arbitrum saw the greatest reduction in transaction costs using the daily median transaction fee. The network averaged a median cost per transaction of $0.37 from December 1, 2023, to the activation of EIP-4844; in the 150 days following EIP-4844 this rate has fallen 94% to $0.02. Scroll saw the lowest decline in median transaction fees of 58%, falling from $0.74 to $0.31.

The rise in activity on the observed rollups has been met with rising transaction failure rates in aggregate. Notably, Arbitrum, Base, and OP Mainnet have seen significant increases in the share of transactions failing on the networks. Base reached as high as a 21% failure rate, Arbitrum 15.4%, and OP Mainnet 10.4% 150 days post-Dencun using the seven-day moving average of daily failure rates.

The failure rates are largely being driven by high activity addresses, likely bots. This Galaxy Research Dune query highlights the failure rates of addresses attempting 100 or more transactions per day. The failure rates for these addresses reached as high as 41.6% on Base, 20.87% on Arbitrum, and 12.85% on OP Mainnet since Dencun activation using a seven-day moving average. In comparison, failure rates of low activity addresses attempting five or fewer transactions daily peaked at 4.02% across all of the observed networks over the same period using the same seven-day moving average.

OP Mainnet has a lower transaction failure rate among low activity addresses than it did on March 13, 2024, while Base’s transaction failure rate among low activity addresses is only marginally higher since the same time. Curiously, the transaction failure rate of low activity addresses on Arbitrum increased 545% post-Dencun. The decline in rollup transaction costs and the concentration of transaction failures from high activity addresses instead of low suggests that bot activity is the likely source of the rising failure rates on these rollups post-Dencun.

More detail on the impact of EIP-4844 on rollup activity can be found on this public Galaxy Research dashboard.

The introduction of EIP-4844 and blobdata greatly improved the economics of operating and using Ethereum rollups. This development, however, has shifted some revenue capture from Ethereum L1 and the ETH burn rate to rollup operators, also known as sequencers.

Under the previous calldata model for data availability (DA), Ethereum was capturing up to 77% of the value generated by rollups. With the implementation of EIP-4844, Ethereum now captures 8% of the value generated by optimistic rollups and 33% of the value generated by ZK rollups, in aggregate (1 − Rollup Percent Profit Margin). This transition aligns with Ethereum’s mission to become an efficient DA layer, as rollups are the primary locus of fee-paying users and transaction activity.

Despite the cost improvements for Ethereum DA, it remains expensive in aggregate, costing rollups $16,473 per gigabyte of blobdata used and $12,458 per gigabyte purchased. High costs are driven by a few days when blob costs surged; when blob demand is low, costs are negligible. Excluding blob costs, type-3 transactions cost an average of just $5.22 per transaction. There is concern that costs could escalate if the demand for blobs increases and remains high. One possible solution could be raising the maximum and target number of blobs per Ethereum block.

All the data used in this report was pulled from a public Dune dashboard published by Galaxy Research. It can be found here .

Legal Disclosure: This document, and the information contained herein, has been provided to you by Galaxy Digital Holdings LP and its affiliates (“Galaxy Digital”) solely for informational purposes. This document may not be reproduced or redistributed in whole or in part, in any format, without the express written approval of Galaxy Digital. Neither the information, nor any opinion contained in this document, constitutes an offer to buy or sell, or a solicitation of an offer to buy or sell, any advisory services, securities, futures, options or other financial instruments or to participate in any advisory services or trading strategy. Nothing contained in this document constitutes investment, legal or tax advice or is an endorsement of any of the digital assets or companies mentioned herein. You should make your own investigations and evaluations of the information herein. Any decisions based on information contained in this document are the sole responsibility of the reader. Certain statements in this document reflect Galaxy Digital’s views, estimates, opinions or predictions (which may be based on proprietary models and assumptions, including, in particular, Galaxy Digital’s views on the current and future market for certain digital assets), and there is no guarantee that these views, estimates, opinions or predictions are currently accurate or that they will be ultimately realized. To the extent these assumptions or models are not correct or circumstances change, the actual performance may vary substantially from, and be less than, the estimates included herein. None of Galaxy Digital nor any of its affiliates, shareholders, partners, members, directors, officers, management, employees or representatives makes any representation or warranty, express or implied, as to the accuracy or completeness of any of the information or any other information (whether communicated in written or oral form) transmitted or made available to you.
Each of the aforementioned parties expressly disclaims any and all liability relating to or resulting from the use of this information. Certain information contained herein (including financial information) has been obtained from published and non-published sources. Such information has not been independently verified by Galaxy Digital, and Galaxy Digital does not assume responsibility for the accuracy of such information. Affiliates of Galaxy Digital may have owned or may own investments in some of the digital assets and protocols discussed in this document. Except where otherwise indicated, the information in this document is based on matters as they exist as of the date of preparation and not as of any future date, and will not be updated or otherwise revised to reflect information that subsequently becomes available, or circumstances existing or changes occurring after the date hereof. This document provides links to other Websites that we think might be of interest to you. Please note that when you click on one of these links, you may be moving to a provider’s website that is not associated with Galaxy Digital. These linked sites and their providers are not controlled by us, and we are not responsible for the contents or the proper operation of any linked site. The inclusion of any link does not imply our endorsement or our adoption of the statements therein. We encourage you to read the terms of use and privacy statements of these linked sites as their policies may differ from ours. The foregoing does not constitute a “research report” as defined by FINRA Rule 2241 or a “debt research report” as defined by FINRA Rule 2242 and was not prepared by Galaxy Digital Partners LLC. For all inquiries, please email [email protected]. ©Copyright Galaxy Digital Holdings LP 2024. All rights reserved.

Multi-View Interactive Representations for Multimodal Sentiment Analysis


2D Shape Representation and Analysis Using Edge Histogram and Shape Feature

  • Conference paper
  • First Online: 03 March 2017


  • G. N. Manjula
  • Muzameel Ahmed

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 516)


Images contain many components that convey visual information and help identify them. Shape is among the most important of these properties; it can be represented in 2D or 3D in the Euclidean plane, and shape diagrams characterize the shape features and properties to be described. Many methods and techniques are available for representing shape, such as Canny edge detection. The main aim of this paper is to determine the shape of an object by comparing it against the mathematical formulas and properties of 2D shapes under different orientations.




Author information

Authors and affiliations.

Department of Information Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India

G. N. Manjula

Jain University, Bangalore, India

Muzameel Ahmed


Corresponding author

Correspondence to G. N. Manjula.

Editor information

Editors and affiliations.

Dept. of Computer Sci. & Engg., Anil Neerukonda Inst. of Tech. & Sci., Vishakapatnam, Andhra Pradesh, India

Suresh Chandra Satapathy

Shri Ramswaroop Memorial Group of Professional Colleges (SRMGPC), Lucknow, Uttar Pradesh, India

Vikrant Bhateja

SCIS, University of Hyderabad, Hyderabad, India

Siba K. Udgata

School of Computer Engineering, KIIT University, Bhubaneswar, Odisha, India

Prasant Kumar Pattnaik


Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Manjula, G.N., Ahmed, M. (2017). 2D Shape Representation and Analysis Using Edge Histogram and Shape Feature. In: Satapathy, S., Bhateja, V., Udgata, S., Pattnaik, P. (eds) Proceedings of the 5th International Conference on Frontiers in Intelligent Computing: Theory and Applications. Advances in Intelligent Systems and Computing, vol 516. Springer, Singapore. https://doi.org/10.1007/978-981-10-3156-4_57


DOI: https://doi.org/10.1007/978-981-10-3156-4_57

Published: 03 March 2017

Publisher Name: Springer, Singapore

Print ISBN: 978-981-10-3155-7

Online ISBN: 978-981-10-3156-4

eBook Packages: Engineering (R0)



COMMENTS

  1. What Is Geometric Modeling? Types & Applications

    The article uncovers the fundamentals of digitally representing objects, spanning from elementary mathematical concepts to advanced applications like finite element analysis. It investigates wireframe and surface modeling techniques and their applications in product design and animation and explores the prowess of geometric solid modeling in engineering, manufacturing, and architecture. In ...

  2. Shape analysis (digital geometry)

    Shape analysis is the (mostly) [clarification needed] automatic analysis of geometric shapes, for example using a computer to detect similarly shaped objects in a database or parts that fit together. For a computer to automatically analyze and process geometric shapes, the objects have to be represented in a digital form. Most commonly a boundary representation is used to describe the object ...

  3. Comprehensive Study on Shape Representation Methods for Shape-Based

    This study presents a thorough analysis of shape representation techniques. Furthermore, a taxonomy of strategies for form representation and description is provided. This study aims to review the current achievements comprehensively, highlight the weaknesses and advantages of various existing methods in shape representation methods, addressing current research issues and challenging tasks in ...

  4. Functionality Representations and Applications for Shape Analysis

    In this report, we discuss recent developments that incorporate functionality aspects into the analysis of 3D shapes and scenes. We provide a summary of the state-of-the-art in this area, including a discussion of key ideas and an organized review of the relevant literature.

  5. Princeton Shape Retrieval and Analysis Group

    Project Overview Our goal is to investigate issues in shape-based retrieval and analysis of 3D models. As a first step, we have developed a search engine for 3D polygonal models (check it out by clicking here).The main research issues are to develop effective shape representations and query interfaces.The Princeton Shape Benchmark and Princeton Segmentation Benchmark provide data for ...

  6. Review of shape representation and description techniques

    Shape representation generally looks for effective and perceptually important shape features based on either shape boundary information or boundary plus interior content.

  7. Review of shape representation and description techniques

    Shape is an important visual feature and it is one of the basic features used to describe image content. However, shape representation and description is a difficult task. This is because when a 3-D real world object is projected onto a 2-D image plane, one dimension of object information is lost.

  8. DREAM.3D: A Digital Representation Environment for the Analysis of

    This paper presents a software environment for processing, segmenting, quantifying, representing and manipulating digital microstructure data. The paper discusses the approach to building a generalized representation strategy for digital microstructures and the barriers encountered when trying to integrate a set of existing software tools to create an expandable codebase.

  9. 55:148,55:247 Chapter 6, Part 1

    55:148 Digital Image Processing / 55:247 Image Analysis and Understanding, Chapter 6, Part I: Shape representation and description; region identification. Defining the shape of an object can prove to be very difficult. Shape is usually represented verbally or in figures.

  10. Functionality Representations and Applications for Shape Analysis

    Thus, in recent years, a variety of methods in shape analysis have been developed to extract functional information about objects and scenes from these different types of cues. In this report, we discuss recent developments that incorporate functionality aspects into the analysis of 3D shapes and scenes.

  11. Shape representation and description

    In Digital and Optical Shape Representation and Pattern Recognition, Orlando, FL, pages 372-376, SPIE, Bellingham, WA, 1988. C. C. Lin and R. Chellappa: Classification of partial 2-D shapes using Fourier descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5): 686-690, 1987.

  12. 55:148,55:247 Chapter 6, Part 3

    Chapter 6, Part III: Shape representation and description; region-based shape representation and description. Chapter 6.3 overview: simple scalar region descriptors, moments, convex hull, graph representation based on region skeleton, region decomposition, region neighborhood ...

  13. (PDF) Statistics and Analysis of Shapes

    The development of digital imagery has triggered keen interest in further refining and unifying the notion of shape. Shape analysis and recognition is an essential element of many applications ...

  14. PDF A Brief Introduction to Statistical Shape Analysis

    A mathematical representation of an n-point shape in k dimensions could be to concatenate each dimension into a kn-vector. The vector representation for planar shapes (i.e. k = 2) would then be x = [x1, x2, ..., xn, y1, y2, ..., yn]^T (1). Shape alignment: to obtain a true shape representation (according to our definition), location, scale and ...
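    The kn-vector construction in equation (1) is easy to sketch; the four point coordinates below are a hypothetical example of mine, and centering is the first of the alignment steps the note refers to:

```python
import numpy as np

# hypothetical planar shape: n = 4 points in k = 2 dimensions
points = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
n = len(points)

# concatenate all x-coordinates, then all y-coordinates, into one kn-vector
# x = [x1, ..., xn, y1, ..., yn]^T
x = np.concatenate([points[:, 0], points[:, 1]])

# remove location: subtract each dimension's mean so the shape is centered
centered = x - np.concatenate([np.full(n, x[:n].mean()),
                               np.full(n, x[n:].mean())])
print(x.shape, round(centered.sum(), 12))  # (8,) 0.0
```

    Removing scale (dividing by the vector norm) and rotation would complete the alignment to a true shape representation.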

  15. PDF Representations, Metrics and Statistics for Shape Analysis of Elastic

    Due to the lack of invertibility of the representation, it is difficult to map statistical quantities back to the object space. Specific goals: in this paper, the goal is to develop tools for a comprehensive statistical analysis of complex shapes with graphical/network structures, specifically seeking (1) a shape metric that is invariant to the ...

  16. PDF Introduction to Shape Analysis

    Multiplying a centered point set z = (z1, z2, ..., z_{k-1}) by a constant w ∈ C just rotates and scales it. Thus the shape of z is an equivalence class [z] = {(w z1, w z2, ..., w z_{k-1}) : w ∈ C}. This gives complex projective space CP^{k-2}, much like the sphere comes from equivalence classes of scalar multiplication in R^n.
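    A quick numerical check of that equivalence class (the point set and the factor w below are arbitrary choices of mine): multiplying by any nonzero w ∈ C changes the point set but not its shape, so mapping every configuration to a canonical representative makes the two coincide.

```python
import numpy as np

def canonical_shape(z):
    """Map a complex point set to a canonical representative of its
    equivalence class [z] = {w*z : w in C, w != 0}."""
    z = np.asarray(z, dtype=complex)
    z = z - z.mean()                         # center (remove translation)
    z = z / np.linalg.norm(z)                # unit norm (remove scale)
    return z * np.exp(-1j * np.angle(z[0]))  # fix phase (remove rotation)

pts = np.array([1 + 0j, 0 + 1j, -1 - 1j])
w = 2.0 * np.exp(1j * 0.7)  # arbitrary nonzero scale-and-rotate factor
print(np.allclose(canonical_shape(pts), canonical_shape(w * pts)))  # True
```

    Fixing the phase via the first coordinate is one arbitrary but common way of picking a representative from each equivalence class.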

  17. paulcaron/INF574-Digital-Representation-and-Analysis-of-Shapes

    INF574-Digital-Representation-and-Analysis-of-Shapes. Labs from course INF574 Digital Representation and Analysis of Shapes by Prof. Maks Ovsjanikov. These labs were designed by Prof. Luca Castelli Aleardi.

  18. Representations, Metrics and Statistics for Shape Analysis of Elastic

    Past approaches for statistical shape analysis of objects have focused mainly on objects within the same topological classes, e.g., scalar functions, Euclidean curves, or surfaces, etc. For objects that differ in more complex ways, the current literature offers only topological methods. This paper introduces a far-reaching geometric approach for analyzing shapes of graphical objects, such as ...

  19. Comparative analysis of shape descriptors for 3D objects

    One of the basic characteristics of an object is its shape. Several research areas in mathematics and computer science have taken an interest in object representation in both 2D images and 3D models, where shape descriptors are a powerful mechanism enabling the processes of classification, retrieval and comparison for object matching. In this paper, we present a literature survey of this broad ...

  20. PDF 2D Geometric Object Shapes Detection and Classification

    Detection and classification depend on the object representation utilized and on how the appearance and shape of the object are displayed for locating it. A comprehensive review of shape analysis techniques is available in Kaiser, Zepeda and Boubekeur (2019), Loncaric (1998), and Nayagam and Ramar (2015). Such techniques allow, for example, drawing battle-scenario symbols directly on a digital map.

  21. Deciphering the Feature Representation of Deep ...

    The enormous success of deep learning stems from its unique capability of extracting essential features from Big Data for decision-making. However, the feature extraction and hidden representations in deep neural networks (DNNs) remain inexplicable, primarily because of lack of technical tools to comprehend and interrogate the feature space data.

  24. PDF Image Representation and Description

    Optical flow: motion of brightness patterns in an image sequence. Assumptions for computing optical flow: the observed brightness of any object point is constant over time, and nearby points in the image plane move in a similar manner. First-order expansion of the brightness-constancy condition: f(x + dx, y + dy, t + dt) = f(x, y, t) + (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂t) dt + O(∂²).
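    That expansion is just a first-order Taylor series of the brightness-constancy condition f(x + dx, y + dy, t + dt) = f(x, y, t). A numerical check on a smooth synthetic "image" (the function below is my own choice, not from the chapter):

```python
import math

# smooth synthetic brightness function f(x, y, t) = sin(x + 2y - 3t)
def f(x, y, t):
    return math.sin(x + 2 * y - 3 * t)

x, y, t = 0.4, 0.2, 0.1
dx, dy, dt = 1e-4, 2e-4, 1e-4

c = math.cos(x + 2 * y - 3 * t)
fx, fy, ft = c, 2 * c, -3 * c  # analytic partial derivatives of f

lhs = f(x + dx, y + dy, t + dt)
rhs = f(x, y, t) + fx * dx + fy * dy + ft * dt  # first-order expansion
print(abs(lhs - rhs) < 1e-6)  # True: the O(d^2) remainder is tiny
```

    Setting the linear terms equal to zero and dividing by dt yields the standard optical-flow constraint equation relating image gradients to motion.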

  25. PDF Shape representation and description

    Substantial variations in the first derivative of object boundaries often yield suitable information. Examples of this can be found in alphanumeric character description, technical drawings, ECG curve characterization, etc. Shape is an object property which has been carefully investigated in recent ...

  28. 2D Shape Representation and Analysis Using Edge Histogram ...

    Shape is a central visual feature for image analysis and representation, and image content can be characterized by shape properties. Figure 2 of the paper shows the workflow of shape-feature extraction. Typical shape characters or features are the center of gravity, mass, ratios, angles, the number of edges, etc.
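    Two of the scalar features listed there, the area (a mass-type quantity) and the center of gravity, fall out of a single pass over a polygon boundary via the shoelace formula; the square below is a made-up example:

```python
def polygon_descriptors(vertices):
    """Signed area (shoelace formula) and center of gravity of a simple polygon."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # shoelace cross-term for this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6 * a), cy / (6 * a))

area, cog = polygon_descriptors([(0, 0), (2, 0), (2, 2), (0, 2)])
print(area, cog)  # 4.0 (1.0, 1.0)
```

    Ratios of such scalar descriptors (e.g. area to perimeter squared) are what make them useful as scale-tolerant shape features.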