The strategy of representation does not call for the most thorough and extensive gathering and processing of information possible, but for the adept compression of the information content that is already present. As the name implies, the body of potential information is merely represented: the representation stands in for the entire information. The concept of representation is not just a strategy for saving computing power or sensors. It is rather a modeling paradigm, and even a technical necessity, for constructing and automating models of deterministic regularities that emerge from statistical and complex processes.
The concept of representation originates in statistical physics. Quantum-physical laws define the regularities of fluctuations; from these follow laws for atoms, from those laws for molecules, from those laws for molecular structures, and so on. At the end of this chain stands a representative elastic modulus on the macro level, which represents the deformation behavior of the underlying scales with a single value for stiffness. These scale transitions are computed with the technique of so-called representative volume elements. With the models and geometric information obtained this way, a component can eventually be simulated. The stiffness could, of course, also be determined by trial, but only a deep understanding and a correlation across scales reveal problems with, for example, stiffness or strength. Typically, these insights are used to eliminate error sources or, more generally, for optimization.

Representations work for systems that are composed of a large number of self-similar processes and interacting elements, in which the leverage points tend to be concentrated. In statistical physics, representations are usually expectation values in the form of integrals over all considered possible states. This does not always directly yield expectation values in the sense of mean values, though, but rather their statistical distribution. Only by virtue of the central limit theorem do all these distributions approach a Gaussian or Poisson distribution, so that the expectation values converge to mean values. This applies at least to stable processes. One can also turn the statement around: stable processes are recognized by the fact that their quantities approach a Gaussian distribution. This has long been known in the field and is, for example, the tacit precondition of the six sigma method in quality management. A signal empowering system is a technical implementation that generalizes this concept for production plants, automatically and with high resolution.
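As an illustrative instance of such an integral over states (the canonical ensemble serves here only as the textbook example, not as part of the SES concept itself): the representative macroscopic value of an observable $A$ is its expectation value over all accessible microstates $x$, weighted by the Boltzmann factor,

$$\langle A \rangle \;=\; \frac{1}{Z}\int A(x)\, e^{-H(x)/k_B T}\,\mathrm{d}x, \qquad Z \;=\; \int e^{-H(x)/k_B T}\,\mathrm{d}x,$$

where $H$ is the Hamiltonian (the energy of a microstate) and the partition function $Z$ normalizes the weighting. A single number on the macro level thus stands in for the full distribution over microstates.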
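The convergence argument can be made tangible with a minimal numerical sketch (Python with NumPy assumed; the exponential distribution is merely a stand-in for any skewed raw process): averaging over many self-similar events yields values that scatter like a Gaussian around the mean, with the spread predicted by the central limit theorem.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Raw process: strongly skewed single events, e.g. individual
# cycle times -- deliberately far from Gaussian.
events_per_sample = 50
raw = rng.exponential(scale=1.0, size=(100_000, events_per_sample))

# Representation: one mean value per group of self-similar events.
means = raw.mean(axis=1)

# Central limit theorem: the means scatter like a Gaussian with
# std = sigma / sqrt(n), even though the raw events are skewed.
print(f"mean of representations: {means.mean():.3f} (raw mean: 1.0)")
print(f"std of representations:  {means.std():.3f} "
      f"(CLT prediction: {1.0 / np.sqrt(events_per_sample):.3f})")
```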
The concept of representation prevents information from being gathered locally on a massive scale while crucial data is still missed. The goal is the most complete possible representation of the production, not the large-scale collection of arbitrary data. In contrast to the big data principle, implementing a signal empowering system therefore does not conflict with the pursuit of minimal effort.
It would be possible, for example, to install an infrared camera instead of a single temperature sensor. In that case, one would initially receive a multitude of signals, one for each pixel, and a selective average would serve as the representation (a sketch follows at the end of this section). Redundant pieces of information that merely vary across the surface offer no direct added value and are therefore a waste of resources.

A representation stands in for, and is exemplary of, an event that occurs somewhere on a surface or in a volume and propagates there in some way. For example: a sentence is spoken. Depending on where the microphone is placed, the registered amplitudes differ across frequency ranges and overlap asynchronously through reflections off the walls. In general there is one strong signal that varies with the place of measurement. Voice recognition, though, always arrives at the same result and even condenses the information into a transcript, including further information about the speaker. A single locally gathered signal can therefore represent the events of the whole room without any claim to capturing the entire distribution of the acoustic pressure. The signal may change on its way from the origin, where the information content is purest, to the point of measurement. The representation only has to capture a sufficient fraction of the repercussion; it does not have to sit on the hot spot of the cause. The choice of a representation point is therefore often fairly robust.

As an alternative to the infrared camera mentioned above, one can also compute the leverage points of the reflections using several microphones. This resembles a continuous, fully automatic measurement at the center of the acoustic source: a fine addition for an SES, but often not as useful for an early warning system as one might expect. As shown earlier, pattern detection works even without this kind of effort. Additional information for localization is of course available, and patterns are generally identified more precisely by such a system; however, the effort is correspondingly many times higher. Something like this is realized in a system such as the Amazon Echo Dot. There, too, seven microphones are used to generate a single representative signal, which is then passed on to the speech recognition; the position of the speaker is located as a by-product (see the beamforming sketch below). Precise acquisition therefore does not mean that more data is gathered, but simply that the quality of the representative signal increases.

The representation strategy is diametrically opposed to the big data approach, in which as much data as possible is first poured into a data lake and the evaluation starts there. The concept of a signal empowering system rather resembles that of a data warehouse.
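A minimal sketch of the selective averaging mentioned above (Python with NumPy assumed; the function name and the region of interest are illustrative, not part of the original): the full per-pixel temperature field of an infrared frame is condensed into one representative signal.

```python
import numpy as np

def representative_temperature(frame, roi=None):
    """Condense an infrared frame (2-D array of pixel temperatures)
    into a single representative signal by selective averaging.

    frame : 2-D array of temperatures in degrees Celsius
    roi   : optional (row_slice, col_slice) restricting the average
            to the region where the relevant process is visible
    """
    if roi is not None:
        frame = frame[roi]
    # Dead pixels (NaN) are ignored; the mean over the remaining
    # pixels stands in for the entire temperature field.
    return float(np.nanmean(frame))

# Example: a 240x320 frame around 80 degC with pixel-level noise.
rng = np.random.default_rng(0)
frame = 80.0 + rng.normal(0.0, 0.5, size=(240, 320))
signal = representative_temperature(frame,
                                    roi=(slice(100, 140), slice(140, 180)))
print(f"representative temperature: {signal:.2f} degC")
```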
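The fusion of several microphones into one representative signal, as in the Echo Dot example, can be sketched as simple delay-and-sum beamforming. This is a sketch under assumed geometry and sampling rate, not the device's actual algorithm: each channel is advanced by its propagation delay from an assumed source position, and the aligned channels are averaged.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, source_pos, fs, c=343.0):
    """Fuse several microphone channels into a single representative
    signal by delay-and-sum beamforming toward an assumed source.

    signals       : array (n_mics, n_samples), synchronous recordings
    mic_positions : array (n_mics, 3), microphone coordinates in metres
    source_pos    : array (3,), assumed source position in metres
    fs            : sampling rate in Hz
    c             : speed of sound in m/s
    """
    signals = np.asarray(signals, dtype=float)
    dists = np.linalg.norm(np.asarray(mic_positions)
                           - np.asarray(source_pos), axis=1)
    # Extra delay of each channel relative to the closest microphone,
    # rounded to whole samples for simplicity.
    delays = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - delays.max()
    # Advance each channel by its delay so the wavefronts line up,
    # then average into one representative signal.
    aligned = np.stack([ch[d:d + n] for ch, d in zip(signals, delays)])
    return aligned.mean(axis=0)
```

Scanning candidate source positions and picking the one with the strongest summed output is what localizes the speaker; the representative signal itself is simply the aligned average.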