Chapter VIII. Accounting and Controlling in Uncertainty: concepts, techniques and methodology


INTRODUCTION
The fuzzy set approach has progressively been introduced into many areas of organisational science in order to compensate for certain inadequacies in traditional tools. Indeed, behaviourists and expected utility researchers have long studied the role of ambiguity and vagueness in the human decision-making process (e.g., Einhorn and Hogarth, 1986) and have highlighted the paradoxes linked to the use of probability theory (e.g., Tversky et al., 1984). The organisational sciences are particularly representative of systems with human interaction, in which information is affected by fuzziness (Zadeh, 1965). The areas of application for fuzzy set theory are characterised by: the importance of the role assigned to human judgement in decision-making, the use of qualitative information, the dominant role of subjective evaluation and, more generally, the processing of information affected by non-probabilistic uncertainty.

Accounting and controlling: the limitations of traditional methods
Following this tradition, a certain number of research projects have developed an analysis of the role of ambiguity, uncertainty, or imprecision in accounting and controlling (March, 1987; Zebda, 1991; Casta, 1994; de Korvin, 1995).
• Firstly, with regard to ambiguity: the statements, terms and rules used in accounting are affected, to a greater or lesser degree, by the ambiguity of the concepts and/or by the imprecision of the data. Most of the linguistic concepts dealt with by financial accounting are "social constructs" which have their origins in professional practices and the process of standardisation. This type of imperfect information has consequences for: the decision-making process itself; the difficulty of determining, using Boolean truth values, the degree of truth of an assertion (for example, "the costs are too high"); and the inadequacy of methods based on probability measures for processing this uncertainty.
• Secondly, with regard to imprecision: the accounting model, which implicitly refers to the physical metaphor, is based on numerical processing. The often illusory search for precision is at the origin of the syndrome of arithmetical exactitude (Morgenstern, 1950). The strictly numerical concept which underlies the accounting representation model is not easily compatible with the imprecise and/or uncertain nature of the data or with the ambiguity of the concepts (for example: the imprecision and subjectivity of accounting evaluations, poorly defined accounting categories, the subjective nature of any evaluation of risk). Although data appears in pseudo-symbolic form at the entry to the accounting process, the purely numerical model of processing is incapable of understanding imprecision, ambiguity and uncertainty in the data (Figure 1). It numerises the data more or less arbitrarily, then puts it through processes based on elementary arithmetic. The operation amounts to an often implicit conversion of imprecise and/or uncertain concepts into a purely crisp numerical representation, which is all that is finally accessible to the user of accounting numbers.

Figure 1: Accounting framework and information processing
• Thirdly, with regard to uncertainty: the ambiguity of the concepts, the impossibility of defining classes of objects with precise boundaries, and the inability to determine binary degrees of truth are factors which express the irrelevance of the postulate of the excluded middle. In this context, which is specific to systems with human interaction, uncertainty cannot be modelled by measures of probability. The unsuitability of the strictly numerical accounting model with regard to imperfect information engenders several types of dysfunction:
• the threshold effect: this is expressed by sudden discontinuities in the behaviour of users of accounting numbers when the accounting measures, or derivative measures, pass from admissible values to unacceptable values which are nevertheless close. This is the case, for example, with the contractual clauses which impose limits on debt ratios.
• the effect of the premature reduction of non-probabilistic entropy: it results from the set of decisions taken at accounting model level with a view to numerising imprecise, vague, or uncertain information. This set of actions leads to a hidden encroachment on the decision-making power of the user of the information produced by the accounting system.
• the loss of the usefulness of the information for taking a decision: as Zadeh points out in his principle of incompatibility, "as the complexity of a system increases, our ability to make precise and yet significant statements about its behaviour diminishes until a threshold is reached beyond which precision and significance (or relevance) become almost mutually exclusive characteristics".
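As a minimal illustration of the threshold effect described above, the crisp covenant test can be contrasted with a graded, fuzzy reading of the same debt-ratio limit; the ratios and thresholds below are hypothetical:

```python
def crisp_violation(debt_ratio, limit=0.6):
    """Boolean covenant test: a tiny change in the ratio flips the outcome."""
    return debt_ratio > limit

def fuzzy_violation(debt_ratio, low=0.5, high=0.7):
    """Graded membership of 'the debt ratio is too high', in [0, 1]."""
    if debt_ratio <= low:
        return 0.0
    if debt_ratio >= high:
        return 1.0
    return (debt_ratio - low) / (high - low)

# Two ratios that are economically almost identical:
print(crisp_violation(0.599), crisp_violation(0.601))   # False True
print(fuzzy_violation(0.599), fuzzy_violation(0.601))   # both close to 0.5
```

The crisp test exhibits the discontinuity in user behaviour; the membership function responds continuously to the underlying measure.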
The fuzzy set approach constitutes a coherent numerical modelling framework within which to treat knowledge and information affected by imprecision, partial ignorance, ambiguity and/or uncertainty within a humanistic process framework. According to Zebda (1991), the fuzzy set approach may be used to solve accounting problems when: "problems involve ambiguous variables, relationships, constraints and goals; binary classifications are unrealistic; high levels of precision are not attainable and the level of accuracy of the estimates required for the analysis is not fixed". This is not a decision theory, but rather an approach which allows the linguistic modelling of vague phenomena using imperfect information.

Applications of the fuzzy set approach to accounting, controlling and auditing
As a result of this observation, a certain number of researchers have applied the fuzzy set approach to the field of accounting, controlling and auditing. These applications may be placed in three categories (for more details, see Zebda, 1989; Siegel et al., 1995):
• applications to auditing include problems such as: internal control evaluation (Cooley and Hicks, 1983), audit sampling (Lin, 1984), materiality judgement (Kelly, 1984), and the going-concern audit decision (Spiceland et al., 1995);
• applications to management accounting and controlling include problems such as: cost variance investigation (Zebda, 1984), management tools (Kaufman and Gil Aluja, 1986), cost-volume-profit analysis (Chan and Yuan, 1990), cost allocation, key success factors (Rangone, 1997), estimating costs (Mason, 1997), and target costing (Zollo, 1999);
• applications to financial accounting include problems such as: financial statements analysis and financial reporting (Gil Aluja, 1989; Gil Lafuente, 1993; Rhys, 1991; Casta, 1994).

Specific methodological problems
The aim of this chapter is to examine the specific problems posed by the use of tools resulting from fuzzy set theory in the field of accounting, controlling and auditing. There are various types of methodological problems:
• Firstly, the taking into account of the imprecision of accounting numbers whilst respecting the double-entry principle. Since it is based on an elementary arithmetical structure, the traditional accounting model, and the underlying measurement, cannot deal with the imprecision and/or uncertainty affecting the data, nor with the ambiguity relative to the formulation of concepts. However, certain precautions must be taken with regard to the introduction of fuzzy set theory. It is not desirable to proceed with a direct transposition of fuzzy arithmetic tools. Indeed, the elaboration of fuzzy financial statements means that the semantics specific to the accounting measurement of the value and income of the firm must be re-examined (Bry and Casta, 1995).
• Secondly, the evaluation of audit risk by means of linguistic variables. The classic approach to audit risk evaluation is based on modelling of the probability type. The integration of ambiguity into the audit risk analysis enables the auditor's decision-making behaviour to be better expressed. The application of fuzzy set theory to the modelling of the audit approach focused for a long time on certain parts of the auditor's approach (evaluation of internal control, materiality decision, audit tests, etc.). The use of linguistic variables in a comparative problem (verbal versus numerical processing) to model the final process of the aggregation of judgements leading to the expression of the auditor's opinion is more recent.
• Thirdly, the taking into account of the interactive nature of the variables in fuzzy arithmetic. Fuzzy arithmetical calculation is based on the application of the generalised extension principle (Zadeh, 1965). Its application requires the interactive relationships which link the variables to be taken into account (Dubois and Prade, 1981). If this condition is not met, fuzzy calculation will generate major artificial imprecision. In the field of organisational science, and more particularly in accounting, controlling and auditing, the variables are often interactive, but the precise relationships are not known. This characteristic excludes any purely analytical approach. For this reason, we will develop a fuzzy calculation approach which allows the interactive nature of the variables to be taken into account on the basis of qualitative knowledge of the nature of the relationships (Lesage, 1999b).
• Fourthly, the modelling of the synergy existing between the assets of a firm. As a process which aggregates information and subjective opinions, the financial evaluation of the company raises many problems relating to ideas of measurement, imprecision, and uncertainty. The methods used in the process of financial evaluation are based on classic aggregation operators possessing properties of additivity. By construction, these methods abandon the idea of expressing the phenomena of synergy (or redundancy) linked to over-additivity (or under-additivity) that may be observed between the elements of an organised set such as a firm's assets. This synergy effect (or, conversely, redundancy) may lead to the value of the set of assets being superior (inferior) to the sum of the values of each asset. This is particularly the case in the presence of intangible assets. We will explore the possibilities offered by non-additive aggregation operators (Choquet, 1953; Grabisch et al., 1995; Sugeno, 1977) with the aim of modelling this effect with the help of fuzzy integrals (Casta and Bry, 1998).
• Fifthly, we will examine the consequences of taking the imperfection of information into account in the process of constructing management tools. This contingent analysis, centred on the tool-user relationship, will lead us to highlight the link which exists between the knowledge representation system and performance (Lesage, 1999b).
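The non-additive aggregation invoked in the fourth point can be sketched with a discrete Choquet integral. The assets, values and capacity below are purely illustrative; the point is only that a super-additive capacity makes the aggregate exceed the corresponding weighted sum:

```python
def choquet(values, capacity):
    """Discrete Choquet integral of 'values' with respect to 'capacity'.

    values:   dict asset -> non-negative evaluation
    capacity: dict frozenset-of-assets -> weight (monotone set function,
              capacity of the empty set = 0)
    """
    items = sorted(values, key=values.get)      # assets by ascending value
    total, prev = 0.0, 0.0
    for i, it in enumerate(items):
        coalition = frozenset(items[i:])        # assets valued >= current one
        total += (values[it] - prev) * capacity[coalition]
        prev = values[it]
    return total

# Hypothetical two-asset firm: an intangible (brand) and a factory.
v = {"brand": 60.0, "factory": 100.0}
mu = {
    frozenset(): 0.0,
    frozenset({"brand"}): 0.3,
    frozenset({"factory"}): 0.5,
    frozenset({"brand", "factory"}): 1.0,   # 1.0 > 0.3 + 0.5: synergy
}
print(choquet(v, mu))   # → 80.0
```

With an additive measure carrying the same singleton weights, the aggregate would be 0.3·60 + 0.5·100 = 68; the super-additive capacity raises it to 80, expressing the synergy between the two assets.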
This chapter is divided into three sections: the first is dedicated to accounting models and imperfect data, the second to accounting models and imperfect information on relations, and the last deals with the interaction between management decisions and imperfect information.

ACCOUNTING MODELS AND IMPERFECT DATA
Because its calculation structure stems from elementary arithmetic, the traditional accounting model is not designed to handle problems linked to the imperfection of information. We are particularly interested in two extensions of it, which concern:
• the imprecision and/or uncertainty affecting the data used in the elaboration of financial statements;
• the ambiguity relating to the definition of concepts when audit risk is being evaluated.
For these problematic areas, we propose extensions to the traditional accounting methods. The extension of the accounting model to the processing of imprecise, even partly subjective, quantitative information is based on the introduction of fuzzy set theory (see Kaufmann and Gil Aluja, 1986; Gil Lafuente, 1993; Rhys, 1991). However, this approach to drawing up fuzzy financial statements requires a thorough re-examination of the semantics of the accounting measurement of value and income.

The concept of measurement in accounting

Measurement consists in mapping the properties of objects onto a numerical representation, the set R of real numbers for example. The concept of measurement used in accounting has been influenced by two schools of thought:
• the classic approach (so-called measure theory), directly inspired by the physical sciences, according to which measurement is limited to a process of attributing numerical values, thereby allowing the representation of the properties described by the laws of physics and presupposing the existence of an additive property;
• the modern approach (so-called measurement theory), which has its origin in the social sciences and which extends the theory of measurement to the evaluation of sensorial perceptions as well as to the quantification of psychological properties (Stevens, 1951, 1959).
The quantitative approach to the measurement of value and income is present in all the classic authors, for whom it is a basic postulate of accounting.
The introduction of Stevens' work by Mattessich (1964), Sterling (1970) and Ijiri (1967, 1975) provoked a wide-ranging debate on the modern theory of measurement but did not affect the dominant model (see Vickrey, 1970). Following criticisms of the traditional accounting model, whose calculation procedures were considered to be simple algebraic transformations of measurements (Abdel-Magid, 1979), a certain amount of work was carried out, within an axiomatic framework, with a view to integrating the qualitative approach. However, the restrictive nature of the hypotheses (complete and perfect markets; see Tippett, 1978; Willett, 1987) means that this approach cannot be generally applied.
Efforts to integrate the qualitative dimension into the theory of accounting did not come to fruition. From then on, the idea of measurement which underlies financial accounting remained purely quantitative.

Generally accepted accounting principles
In a given historical and economic context, financial accounting is a construction which is based on a certain number of principles generally accepted by accounting practice and by doctrine. An understanding of economic reality through the accounting model representing a business is largely conditioned by the choice of these principles. Among them we can distinguish those which govern methods of evaluation (the principle of recording values at their historic costs, the principle of conservatism, the principle of consistency, etc.), and those which define the protocol of observation (the principle of entity, the principle of the independence of accounting periods, the going-concern principle, etc.). The principle of double-entry occupies a specific place. By prescribing, since the Middle Ages, the recording of each accounting transaction from a dual point of view, it laid down an initial formal constraint which affected both the recording and the processing of the data in the accounts. Later, with the emergence of the idea of the balance sheet, the influence of this principle was extended to the structuring of financial statements. Despite the appearance of other ways of organising the data, the principle of double-entry is still widely identified with the technology of financial accounting.

The measurement of value and income in accounting: the balance sheet equation
The accounting model for the measurement of value and income is structured by double-entry through what is known as the balance sheet equation. It gives this model strong internal coherence, in particular with regard to the elaboration of financial statements. In fact, the balance sheet equation expresses an identity in terms of assets and liabilities:

(1) Assets(T) ≡ Net Equities(T) + Debts(T)

Since this relationship describes the tautological nature of the company's value, it is, by nature, verifiable at any time. In particular, the application of the principle of double-entry leads to each transaction being recorded in such a way that the equation remains verified at T+1, in the form:

(2) Assets(T+1) ≡ Net Equities(T) + [Revenues(T,T+1) − Expenses(T,T+1)] + Debts(T+1)

Thereafter, the technique of double-entry accounting enables the company's assets and result to be calculated simultaneously, by means of the sequential updating of the balance sheet equation after the recording of each transaction.
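The sequential updating expressed by equations (1) and (2) can be sketched as follows; the account names and amounts are illustrative only:

```python
def balanced(l):
    """Equation (2): Assets == Net Equities + (Revenues - Expenses) + Debts."""
    rhs = l["equity"] + l["revenues"] - l["expenses"] + l["debts"]
    return abs(l["assets"] - rhs) < 1e-9

# Initial balance sheet, satisfying equation (1).
ledger = {"assets": 1000.0, "equity": 600.0, "debts": 400.0,
          "revenues": 0.0, "expenses": 0.0}
assert balanced(ledger)

def record(l, changes):
    """Apply one transaction given as {account: signed change}; double-entry
    requires the balance sheet equation to remain verified afterwards."""
    for account, delta in changes.items():
        l[account] += delta
    assert balanced(l), "transaction violates the balance sheet equation"

record(ledger, {"assets": +150.0, "revenues": +150.0})   # a cash sale
record(ledger, {"assets": -30.0, "expenses": +30.0})     # a cash expense
print(ledger["assets"])   # → 1120.0
```

Each `record` call is the dual-sided recording of a transaction: the equation is re-verified after every update, which is exactly the role the double-entry principle plays in the formal model.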

The algebraic structure of double-entry accounting
On a formal level, the underlying algebraic structure has been explained by Ellerman (1986). Going beyond Ijiri's classic analysis by integrating both the mechanism of the movement of accounts and the balance sheet equation, Ellerman identifies a group of differences: a group constructed on a commutative and cancelling monoid, that of the positive reals equipped with addition. He calls this algebraic structure the Pacioli group. The Pacioli group P(M) of a cancelling monoid M is constructed from a particular equivalence relation ® between ordered couples of elements of M.
• The ordered couples (or, more exactly, the equivalence classes of ordered couples) are an extension of the usual "two-column accounts", known as T-terms and noted [d // c].
• If 0 is the neutral element of the law + in M, [0 // 0] is trivially the neutral element for + on the classes of ordered couples.
• The equivalence relation ® defines the equality of two T-terms starting from the equality of their crossed sums:

[a // b] ® [c // d] ⟺ a + d = b + c

• The reflexivity and symmetry of the relation ® are clear; its transitivity results from the cancelling nature of M.
Subsequently, each transaction is recorded in the form of the addition of a 0-term (a T-term equivalent to [0 // 0]), thereby allowing the balance of the initial balance sheet equation to be maintained. We can thus schematise the method of recording transactions as the sequential updating of the balance sheet equation by the addition of different 0-terms: initial balance sheet equation (0-term) + transactions (0-terms) = updated balance sheet equation (0-term).
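The crossed-sum equivalence underlying the Pacioli group can be sketched on ordinary non-negative integers, where it reproduces the construction of a group of differences; the amounts are illustrative:

```python
def equivalent(t1, t2):
    """[a // b] ® [c // d]  iff  a + d = b + c (equality of crossed sums)."""
    (a, b), (c, d) = t1, t2
    return a + d == b + c

def add(t1, t2):
    """Component-wise addition of T-terms: debits with debits, credits with credits."""
    (a, b), (c, d) = t1, t2
    return (a + c, b + d)

t = (70, 30)           # an account with a 40 debit balance
zero_term = (25, 25)   # a balanced 0-term, equivalent to [0 // 0]
assert equivalent(zero_term, (0, 0))
assert equivalent(add(t, zero_term), t)    # adding a 0-term preserves the class
print(equivalent((70, 30), (45, 5)))       # → True: both have balance 40
```

Equivalence classes gather all accounts with the same balance without that balance ever being computed as a difference, which is the point of the construction on a monoid where subtraction is not available.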

Extension of double-entry accountancy to fuzzy numbers (Bry and Casta, 1995)
On any ordinary set E, a fuzzy subset Ã is defined with the help of a membership function µ_Ã : E → [0, 1] measuring the degree to which each element x ∈ E belongs to Ã. A fuzzy number is any convex fuzzy subset of a numerical referential such that at least one element has a membership degree equal to 1. In practice, to speed up calculation, we usually consider fuzzy numbers of a given type: LR fuzzy numbers, noted in abridged form (m, α, β)_LR; LR fuzzy intervals, noted (m1, m2, α, β)_LR; and trapezoidal fuzzy numbers, noted (a, b, c, d). From a semantic point of view, it must be said that fuzzy numbers will here be seen as imperfectly specified values attributed to perfectly referenced objects. For example, in the assertion "the stock of products amounts to around 10,000 units", the fuzzy number "around 10,000" is an imprecise measurement associated with a perfectly defined object (the volume of the stock of products). This measurement may be considered to be virtually knowable with all the precision required. We will call "referent" the precise but unknown measurement associated with the well-defined object being measured. Fuzzy numbers can thus be seen as uncertainty surrounding crisp numbers which "really exist", in the sense that we can imagine being able to specify them completely by acquiring the missing information. In order to avoid any ambiguity, fuzzy numbers are noted in lower case (z̃) when referred to as values, and in upper case (Z̃) when they represent referenced fuzzy numbers.
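As a sketch of these notions, a trapezoidal fuzzy number (a, b, c, d) and its membership function can be written as follows; the parameters chosen for "around 10,000" are illustrative:

```python
def trapezoidal(a, b, c, d):
    """Membership function of the trapezoidal fuzzy number (a, b, c, d):
    0 outside the support [a, d], 1 on the core [b, c], linear in between."""
    def mu(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)
    return mu

# "around 10,000 units": core [9500, 10500], support [9000, 11000]
around_10000 = trapezoidal(9000, 9500, 10500, 11000)
print(around_10000(10000))   # → 1.0
print(around_10000(9250))    # → 0.5
print(around_10000(8000))    # → 0.0
```

The core contains the values considered fully possible for the referent; the sloping sides encode the progressive loss of possibility as one moves away from it.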

Procedure of immediate extension to the fuzzy numbers "in value"
The first usable approach to fuzzifying double-entry accounting consists in directly constructing an extension of the Pacioli group, on the basis of the fuzzy addition of the values and of an extension of the equivalence relation ® relative to the crossed sums.

A. Construction of the fuzzy monoid on the real positives ("in value")
First, this approach quickly yields a monoid on the fuzzy reals "in value". Indeed, the set of positive fuzzy reals equipped with addition is a commutative monoid, to the extent that:
• the addition of two positive fuzzy reals is trivially a positive fuzzy real;
• the addition is associative and commutative on the set of fuzzy reals;
• the neutral element is the ordinary real 0, which is a particular fuzzy real.
• On the other hand, it is not trivial that this monoid of positive fuzzy reals is cancelling, since the subtraction of a fuzzy number from itself (as a value) does not give zero. The property becomes obvious, however, if we consider the set of fuzzy numbers of the LR type, L and R being two given shape functions. The cancelling nature of the monoid then follows from term-by-term identification:

(m_x, α_x, β_x)_LR + (m_z, α_z, β_z)_LR = (m_y, α_y, β_y)_LR + (m_z, α_z, β_z)_LR ⟹ m_x = m_y, α_x = α_y, β_x = β_y

Finally, as a special case, the set of trapezoidal fuzzy numbers (or fuzzy intervals) equipped with addition is a commutative and cancelling monoid.
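The component-wise character of the addition of trapezoidal fuzzy numbers, and the term-by-term identification that underlies cancellation, can be sketched as follows; the numbers are illustrative:

```python
def fuzzy_add(x, y):
    """Fuzzy addition of trapezoidal numbers (a, b, c, d) is component-wise."""
    return tuple(p + q for p, q in zip(x, y))

x = (90, 95, 105, 110)   # "around 100"
z = (40, 45, 55, 60)     # "around 50"
s = fuzzy_add(x, z)
print(s)                 # → (130, 140, 160, 170)

# Cancellation by term-by-term identification: if x + z and y + z share the
# same four components, then x and y share the same four components.
y = tuple(p - q for p, q in zip(s, z))   # identification, NOT fuzzy subtraction
assert y == x
```

Note that the component-wise difference used to recover x is a symbolic identification of parameters, not the fuzzy subtraction of the numbers "in value", which would widen the spreads instead of cancelling them.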

B. Construction of the Pacioli group on the fuzzy reals "in value"
It is possible to construct the Pacioli group associated with the monoid of positive LR fuzzy reals equipped with fuzzy addition by extending the relation ®; the extension trivially remains an equivalence relation. However, such an extension poses a problem of a semantic nature. Indeed, in the case of positive reals, the sole initial object of the relation ® was to enable classes of accounts with the same balance to be defined without having to calculate this balance as a difference (subtraction not being defined for all the elements of a monoid). What does this extended relation signify? To clarify this point, we examine the consequences it creates in the case of LR numbers (see Figure 2; source: Bry and Casta, 1995).

Figure 2: Extended relationship ® "in value"
It is obvious, in the fuzzy case, that this equality defined from the extension of the "crossed sum" in no way corresponds to the equality of the corresponding fuzzy differences. In conclusion, a (commutative) Pacioli group has been formally constructed on the commutative monoid of fuzzy reals "in value" equipped with addition. However, this structure has nothing in common with a "group of differences". In such conditions, the Pacioli group cannot be seen as the natural structure of double-entry accounting extended to fuzzy numbers.

C. Extension of the calculation of balances and the resolution of equations
When calculating the balance of each account (or when calculating the net equities in the balance sheet), it is always necessary, in the final analysis, to solve a fuzzy equation of the type x̃ = ỹ + z̃. Proceeding naively, we would write ỹ = x̃ (−) z̃, and hence x̃ = (x̃ (−) z̃) + z̃, which is totally false when the operator (−) denotes fuzzy subtraction (except where z̃ is a crisp real). Moreover, although in the crisp case the data and the unknowns play the same numerical role in an equation, the same is not true in the case of a fuzzy equation. To legitimise an operating use of fuzzy equations, it is important to give them an unambiguous semantics. It seems appropriate, in particular, to make the following choices:
• to consider "referenced" fuzzy reals;
• to distinguish equality "in value" from identity. Identity is interpreted in terms of constraint: it has to be satisfied by those imprecisely known crisp numbers that fuzzy numbers stand for. It is not, therefore, a question of the simple equality of fuzzy numbers, but rather of an identity of the "referents" (which we note == to distinguish it from fuzzy equality, and call "strong equality"). Henceforth, z̃ == x̃ signifies that z̃ and x̃ are in fact one and the same measurement of the same thing, with the obligation for the measurements to be logically equal. The least precision acquired on z̃ is carried over to x̃, in such a way that equality is maintained under all possible states of information;
• to suppose that the uncertainties affecting two different pieces of data are independent (in the sense that we may acquire all the information on the first without increasing the precision of the second). On the other hand, contrary to the case of two pieces of data considered independent, the subtraction of a piece of data from itself has the real 0 as its result (it is not, therefore, a question of fuzzy subtraction).
• to consider the unknowns as totally produced by the data and the constraints, and not as exogenous quantities: consequently, any acquisition of information about the data strictly conditions the precision of the unknowns. The latter will be noted ã*, Ã*, fuzzy numbers without an asterisk being considered as data.
Under these conditions, the methods of resolution are very close to those traditionally applied to crisp equations: indeed, if Z̃ is any perfectly referenced fuzzy number, Z̃ − Z̃ == 0 allows expressions to be simplified. In such a context, it is nonetheless necessary to perform all the calculations in an exclusively literal manner, proceeding first to the simplification of fuzzy numbers with the same "referent". Finally, in the simplified expression, in which each referent appears at most once, the data are replaced by their numerical values and the calculations are performed with the fuzzy operators.
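The contrast between fuzzy subtraction "in value" and the simplification permitted by strong equality can be sketched as follows; representing referents as string labels is a deliberate simplification for illustration:

```python
def fuzzy_sub(x, y):
    """Fuzzy subtraction 'in value' on trapezoidal numbers: the spreads
    add up, so z - z is NOT the crisp 0."""
    a, b, c, d = x
    e, f, g, h = y
    return (a - h, b - g, c - f, d - e)

z = (95, 98, 102, 105)       # "around 100"
print(fuzzy_sub(z, z))       # → (-10, -4, 4, 10): imprecision doubles

# With "referenced" fuzzy numbers, two occurrences of the same referent are
# simplified literally BEFORE any numerical computation: Z - Z == 0.
def referenced_sub(ref_x, ref_y):
    """Symbolic sketch: same referent -> crisp 0; otherwise no simplification."""
    return (0, 0, 0, 0) if ref_x == ref_y else None

assert referenced_sub("stock_value", "stock_value") == (0, 0, 0, 0)
```

This is precisely why the resolution rules above require literal calculation first: only at the symbolic level can identical referents be eliminated, preserving maximal theoretical precision.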

Procedure for the extension to "referenced" fuzzy numbers
As we have just seen, it is necessary to reason using referenced fuzzy numbers in order to take into account the total dependence induced by the identity relationship linking the two terms in which a transaction is recorded.

A. Construction of fuzzy T-terms
In order to proceed to the extension of double-entry accounting, it would seem completely inopportune to use the Pacioli group previously constructed on the monoid of positive fuzzy reals "in value" equipped with addition. Indeed, to use this structure when considering fuzzy 0-terms, that is to say T-terms of the type [ã // ã], would be to disregard the meaning of the double-entry principle: the recording of any transaction must be perfectly balanced by construction. Where uncertainty exists concerning the amount of the transaction [d̃ // c̃], it is the exact values of the debit and credit which must be equal (and not merely the membership functions). Between the debit and credit of a fuzzy transaction there therefore exists a stronger link than the simple equality of fuzzy numbers: when a debit is theoretically equal to a credit, it is not a question of the equality of the fuzzy numbers, but of the identity (of the "referents"), or strong equality.
It is therefore necessary to construct a structure (the simplest possible, but as close as possible to the Pacioli group) in which we can express the strong links between uncertainties. This means extending the sum to "referenced" positive fuzzy reals, and then constructing the set of all the possible sums of referenced positive fuzzy reals, initially supposed independent.
• The extension of the addition to the "referenced" fuzzy reals constitutes an application of the principle of calculation on interactive variables (Dubois and Prade, 1981) to a particular case where the variables are linked by an identity relationship. Depending on the nature of the interactivity, the extension of the addition is defined as follows:
1. for independent fuzzy reals Ã and B̃, Ã + B̃ corresponds to the fuzzy addition;
2. for two totally interactive fuzzy numbers (in the sense of being linked by an identity relationship), the addition reduces to multiplication by a scalar, that is to say: ∀k, m ∈ ℕ, k·Ã + m·Ã = (k + m)·Ã;
3. for the intermediate case of two operands obtained as a combination of the two preceding cases, the properties of commutativity and associativity should be used.
Thereafter, the extension to the set of fuzzy reals obtained by any summation of independent "referenced" fuzzy reals is direct; it should be pointed out that the addition of terms with distinct referents is the fuzzy sum.
• The set S of all the sums obtained from "referenced" fuzzy reals with independent uncertainties is a commutative monoid. The uncertainties of the elements of this monoid are clearly no longer a priori independent. The set of all such sums of positive "referenced" fuzzy reals will be noted S+.
We can extend all the other operations of classic arithmetic to the "referenced" fuzzy reals in the same way. For example, in order to take total interactivity into account, the differences between elements of the monoid S+ are calculated by first eliminating the strongly equal terms: the subtraction of a "referenced" fuzzy number from itself comes to the crisp 0, so that each Ã admits a symmetrical element −Ã for the addition in the set of "referenced" fuzzy reals, and the monoid S+ is trivially cancellative. The operators which remain, once the strongly equal numbers have been eliminated, are the fuzzy operators.
• The set of couples of elements of S+, equipped with the addition of couples, enables us to construct a second commutative monoid. Couples of elements of S+ which are not a priori identical will be noted with a single separator /, that is to say [A / B]; the difference between the components of such a couple is then computed according to the rules above.
By definition, the addition of couples is internal to this set. It is associative, commutative, and has [0 / 0] as a neutral element, which belongs to the set. As a result, this set forms a monoid. On the other hand, no element other than the neutral admits a symmetrical element in this set, which does not, therefore, constitute a group.
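The gap between treating repeated occurrences of the same fuzzy number as independent and treating them as totally interactive can be sketched numerically; the trapezoidal number is illustrative:

```python
def fuzzy_add(x, y):
    """Addition of trapezoidal numbers with independent uncertainties."""
    return tuple(p + q for p, q in zip(x, y))

def fuzzy_sub(x, y):
    """Subtraction under independence: the spreads accumulate."""
    a, b, c, d = x
    e, f, g, h = y
    return (a - h, b - g, c - f, d - e)

def scalar(k, x):
    """(k)·A for totally interactive copies of the same referent A (k > 0)."""
    return tuple(k * p for p in x)

A = (8, 10, 10, 12)                      # "around 10"
naive = fuzzy_sub(fuzzy_add(A, A), A)    # 2·A - A with the three A's independent
interactive = scalar(2 - 1, A)           # (k + m)·A with k = 2, m = -1
print(naive)         # → (4, 10, 10, 16): artificially widened support
print(interactive)   # → (8, 10, 10, 12): the original imprecision is preserved
```

The naive computation generates exactly the artificial imprecision that the interactive rule k·Ã + m·Ã = (k + m)·Ã is designed to avoid: eliminating identical referents symbolically before any numerical fuzzy operation.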

B. Relationship of equivalence
On the preceding set we can define an equivalence relation analogous to ®, based on the strong equality of crossed sums: [A / B] ® [C / D] ⟺ A + D == B + C. The neutral element of the quotient structure is trivially the class of couples of strongly equal elements [Ã / Ã]. The quotient structure therefore forms a commutative group.

C. Construction of accounting in fuzzy double-entry
In making the distinction between fuzzy numbers "in value" (equipped with fuzzy operations and fuzzy equality) and "referenced" fuzzy numbers (equipped with an extension of these operations which respects the identity of the referent, and with strong equality), we have been able to maintain the essentials of Pacioli formalism:
• the symmetry of the roles of debit and credit in the recorded entries,
• the strict, non-fuzzy accounting balances,
• the interpretation as a group of differences, here extended to fuzzy differences.
As in the crisp case, this accounting is founded on the 0-terms (T-terms of the type [Ã / Ã]). It consists of updating an initial balance sheet equation by adding to it the 0-terms known as transactions. Starting from the initial balance sheet equation, we note that the equality between the left and right members is not the simple equality of fuzzy numbers, but strong accounting equality. Such an equation will thereafter carry an unknown (the balance of the accounts), and it is possible to write it in the form of a 0-term. The mechanism whose principle has just been described enables us to extend double-entry accounting to the processing of fuzzy numbers, maintaining the essentials of classic formalism, provided the following processing rule is respected: the operations should be performed in a literal manner, with numerical calculations only being carried out on simplified expressions, that is to say when a given referent appears no more than once. Indeed, only a literal calculation enables the elimination of referents which appear identically in both debit and credit, thus preserving maximal theoretical precision.
Finally, in order to set out the principles of construction for fuzzified financial statements (balance sheet and income statement), we have suggested extending double-entry accounting, as a recording, processing and aggregation process, to transactions represented by fuzzy numbers. The direct approach, which consists of extending the underlying Pacioli group structure to fuzzy numbers, immediately seems unfounded. After re-examining the semantics specific to the accounting context of measurement of value and income, we have suggested a certain number of operators on positive fuzzy reals linked by strong information constraints, such as those stemming from the application of the principle of double-entry to the processing of data. Under these conditions, it is theoretically possible to extend double-entry accounting to the treatment of fuzzy numbers, while at the same time maintaining the essentials of classic formalism. Such an approach is nonetheless unsatisfactory. Indeed, in order to limit the mechanical growth of the imprecision arising from the redundancy of information, and in particular to preserve the significance of the aggregated values, we have introduced a "reference system" for transactions. This enables us to define, on a symbolic level, a simplification operator intended to offset the disadvantages of fuzzy subtraction. However, when faced with a large number of transactions, such an approach is not really operational.

Linguistic model of audit risk evaluation
The aim of certifying accounts is to be reasonably sure there are no significant 10 errors in published financial statements. Auditors use evaluation models which allow them to calculate audit risk (i.e. the risk that the auditor has made a mistake when giving his opinion). However, the knowledge effectively available during the process of audit risk evaluation is largely characterised by imperfection. The classic models are therefore not sufficiently reliable. Consequently, we wished to develop and test a linguistic audit risk evaluation model which would tend to reconcile these two complementary aspects (formalisation/judgement). Based on the American Statements on Auditing Standards (SAS, AICPA, 1992), it enables the preliminary application of a theoretical audit risk evaluation model to real situations.

The SAS standard model
Let us review the role of the auditor. A company is obliged to maintain accounts for each transaction it carries out and to periodically publish financial statements for the benefit of third parties (shareholders, banks, suppliers, creditors, government departments, etc.). These statements must be established in accordance with the prevailing accounting standards, and the auditor is bound to check this when he certifies the accounts. In this context, two opposing constraints limit the auditor's action:
1) To carry out sufficient work to be able to justify an opinion on the accounts (quality constraint),
2) To respect a cost (for the company) and a reasonable time-limit in order to render the use of the accounts relevant to third parties (economic constraint).
This problem is solved by a partial verification (economic constraint), the results of which are extrapolated to cover the financial statements in their entirety (quality constraint). The problem therefore lies in the elaboration of a decision making process which will enable the auditor to move from a set of partial opinions obtained from elementary work to an overall opinion on the financial statements in their entirety.
The existence of professional standards makes it possible to justify the quality of the work carried out under the economic constraint. Some standards set out complete audit risk evaluation models. In this respect, the American standards are exemplary for two reasons: they present an extremely formal evaluation model and they constitute the basis of the methodology adopted by the big international audit firms. They therefore represent a natural support for the design of any audit risk evaluation model.
Faced with the complexity of the auditor's task in arriving at an opinion, the approach adopted by the SAS consists of a triple decomposition:
1) Decomposition of the object of the auditor's evaluation: the auditor expresses his opinion on the financial statements from a partial evaluation (economic constraint) of the accounts C1, ..., Cm, by means of tasks t1, ..., to.
2) Decomposition of the audit risk (AR) according to the following steps:
• First, the company must have erroneously transcribed information originating from its environment in the form of an accounting entry (Inherent Risk: IR),
• Then, this error will not have been corrected by the company's internal control (Control Risk: CR),
• Finally, the auditor will not have corrected this error (Detection Risk: DR).
The auditor's detection work can itself be divided into three major families, in which the respective risk of not detecting an error is measured: analytical coherence review (Ra), exhaustive verification of key items (KI), and statistical sampling (Stat).
3) Decomposition of the auditor's objective. The final objective of having no errors in the financial statements has also been defined by the American standards in accordance with five characteristics, known as "Assertions" (SAS n°55, AICPA 1992):
• Existence or occurrence: everything which is recorded must be correctly recorded,
• Completeness: everything which must be recorded is comprehensively recorded,
• Rights and obligations: every commitment must appear in the financial statements,
• Valuation or allocation: valuation methods must be correctly applied,
• Presentation and disclosure: the presentation standards must be respected.
This approach therefore enables the auditor to allocate his means effectively by identifying the areas of risk into which he must delve more deeply. It dictates that each task performed by the auditor allows the evaluation of at least one of these components. We therefore arrive at the auditor's equation, linking AR to its components IR, CR and DR, which must be verified for each account and for each assertion K. The most frequent use of this equation is its probability expression: pK(AR) = pK(IR) × pK(CR) × pK(DR). In general, practice sets the following tolerable audit risk: pK(AR) ≤ 5%.
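The multiplicative risk model can be sketched directly. The helper names below are illustrative, and the combination of the three detection families into a single DR is left outside the sketch (the text does not specify it here); only the product form and the 5% tolerable threshold come from the standards as described above.

```python
# Hedged sketch of the SAS-style multiplicative audit risk model.

def audit_risk(ir: float, cr: float, dr: float) -> float:
    """Probabilistic audit risk for one account/assertion: AR = IR x CR x DR."""
    return ir * cr * dr

def max_detection_risk(ir: float, cr: float, target_ar: float = 0.05) -> float:
    """Detection risk the auditor can tolerate while keeping AR <= target_ar."""
    return target_ar / (ir * cr)

# high inherent risk and weak internal control force more detection work
# (i.e. a lower tolerable detection risk):
print(round(max_detection_risk(0.9, 0.8), 3))   # -> 0.069
```

Read in this direction, the equation is a planning tool: the weaker the environment and the internal control, the more detection work is needed to stay under the tolerable overall risk.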
The problem lies in the evaluation, then the aggregation, of the information gathered at matrix level in order to calculate the level of effective final error risk and to compare it with the level of risk considered acceptable. In practice, this model constitutes the basis of the methodologies and assessment systems developed by audit firms to respect both the quality constraint and the economic constraint. But it poses two main types of problem:
1) Evaluation of human judgements by numbers
Apart from the difficulty of evaluating conditional probabilities (Tversky, Kahneman and Slovic, 1984), it has been observed that, in practice, auditors choose to use words rather than numbers to express their judgement on the procedures, in particular for IR (evaluation of the risk linked to the environment). This component of risk is the subject of assessments concerning the foreseeable nature of the business, the competence of the management, etc., factors which are more a matter of judgement than of a precise measure supported by numbers. This is why, in practice, most audit firms have recourse to the linguistic evaluation of some risks (mainly IR, CR and Ra) (Janell and Wright, 1992).

2) Aggregation of interdependent risks
The probabilistic aggregation raises the problem of knowledge of the aggregation structure. It presupposes a network in the form of a tree, where the connections are clearly identified, thereby allowing the impact of an element of proof at the level of the assertion, of the account, or of the financial statements in their entirety, to be measured accurately. However, no element has as yet been established which proves perfect knowledge of these interrelationships (Krishnamoorthy, 1993). Conversely, the difficulty of linking the assertions to an evaluation of the accounts has been highlighted (Waller, 1993). Many criticisms (Cushing and Loebbecke, 1983) have been levelled concerning the probabilistic treatment of the auditor's equation, in particular with regard to the conditions of independence of the variables (for example, the taking into account of prevention and detection effects: although both are defined as components of CR by the SAS, the preventative effect is also taken into account in IR). Other criticisms (Dusenbury, Reimers and Wheeler, 1996) concern the necessary complexity of the model when evaluating at assertion level, as a result of the quantity of connections due to the inference structure, which is based on Bayes' rule (see Lea, Adams and Boykin, 1992).
These criticisms have led Srivastava and Shafer (1992) to develop a belief-based audit risk assessment model. The basic idea of this theory is to evaluate, not an element A directly, but all the parts of the set {A ; Ā}, where A is the "absence of error" event and Ā the "presence of error" event. For example, by replacing probability by a belief function, this model allows us to distinguish between very different audit proofs with regard to their impact on the real reliability of the financial statements. This improvement is obtained through the admittance of the subjectivity of the evaluation. Since its publication, this model has been the subject of some experiments (Dusenbury, Reimers and Wheeler, 1996), (Dutta and Srivastava, 1993). However, this approach, like the probabilistic treatment, is not applied by audit firms, because it continues to use numerical evaluation as well as a probabilistic (multiplicative) aggregation.
Finally, the problems raised have led to an attempt to design a model which presents the two following characteristics: • a linguistic evaluation, in order to conform to practice and which also enables the processing of perfect information and statistical information when it exists, • an aggregation of the partial ignorance type.

Design of an audit risk evaluation model with imperfect information
The first non numerical approach to a problem close to the evaluation of audit risk was made by Cooley and Hicks (1983) with regard to the evaluation of the internal control risk (CR). The authors were able to bring about this improvement by representing information by means of linguistic variables, thereby respecting the imperfect nature of the judgements expressed by the auditors. However, major drawbacks (in particular the membership functions of Zadeh's canonical linguistic variables which are therefore not context dependent, as well as its limitation to the sole problem of internal control) prevent its effective use, which explains why this model has not been tested since its publication.
On the other hand, the representation of information characterised by uncertainty and the imprecision of linguistic variables allows an evaluation which respects their nature, while at the same time placing them within a mathematical framework (fuzzy sets approach) which allows them to be aggregated. These characteristics lead us to fall back upon this method of processing information in solving the problem of audit risk evaluation.

A. Evaluation by means of "context dependent" linguistic variables
The so-called "experts' method" (Aladenise and Bouchon-Meunier, 1997) 11 enables us to determine the kernels and supports of linguistic variables by using psychometric questionnaires. These variables express the auditor's confidence in the procedure tested and will be represented, for simplification of the calculation process (Klir and Yuan, 1995), by trapezoidal fuzzy numbers (TFN). This list must be completed by the values [cer] (certitude, corresponding to the point {x=10 ; µ(x)=1}) and [stat] (membership function determined by the distribution of probabilities arising from statistical sampling procedures). The auditor's equation therefore becomes an aggregation of judgements concerning the confidence the professional has in the various components, expressed by one of the following seven evaluations of confidence: very weak, weak, moderate, strong, very strong, [cer] and [stat].

B. Partial ignorance aggregation structure
An example of a partial ignorance aggregation structure might be the following. This model is based on the following elements:
• [IR] evaluates the environment; its evaluation constitutes the framework within which the other evaluations ([ADR]) are placed.
• [ADR] is the "pessimistic" aggregation of all the detection procedures: they are not considered to have conjugated effects on the evaluation of the risk of final error. Overall confidence will thus be given by the procedure which produces the highest degree of confidence, which mathematically is expressed by the use of the operator max (noted ∨).
• The aggregation between these two major elements is carried out using an aggregator Ω, interpreted as a fuzzy OR (Bühler, 1994).
• The adjustment of this compensation is carried out by means of the parameter λ, which reflects the privileged taking into account of the "existence" assertion when the audited account is an assets account and of the "completeness" assertion when it is a liabilities account.
• Partial ignorance does not concern the t% of [KI] evaluated with certainty, since it is a question of the total validation of t% of the audited account.
This equation constitutes a suggestion for a linguistic audit risk evaluation model. It has been tested by a French company, member of one of the five leading world audit firms.
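The aggregation structure can be sketched numerically. In the sketch below, the TFN kernels and supports, and the compensatory form chosen for Ω (a convex combination λ·max + (1−λ)·mean), are illustrative assumptions; only the pointwise max (∨) for [ADR] and the value λ Assets = 0,18 are taken from the text.

```python
# Sketch: linguistic confidences as trapezoidal fuzzy numbers on a 0-10 scale,
# sampled on a grid. TFN parameters and the form of Omega are assumptions.

def tfn(a, b, c, d):
    """Membership function of the trapezoid (a, b, c, d)."""
    def mu(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)
    return mu

GRID = [i / 10 for i in range(0, 101)]        # 0.0 .. 10.0
STRONG      = tfn(5, 6, 7, 8)                 # assumed kernel/support
VERY_STRONG = tfn(7, 8, 9, 10)

def adr(*procedures):
    """Pessimistic detection aggregate: pointwise max (fuzzy OR, noted v)."""
    return [max(mu(x) for mu in procedures) for x in GRID]

def omega(ir, detection, lam):
    """Compensatory aggregation of [IR] and [ADR] (assumed form)."""
    return [lam * max(a, b) + (1 - lam) * (a + b) / 2
            for a, b in zip(ir, detection)]

ir  = [STRONG(x) for x in GRID]
out = omega(ir, adr(STRONG, VERY_STRONG), lam=0.18)   # lambda Assets (text)
```

The parameter λ then plays exactly the tuning role described above: moving it towards 1 makes Ω behave like a pure fuzzy OR, towards 0 like a plain average.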

Experiment of the linguistic audit risk evaluation model
The experiment consisted of a comparison between the real overall evaluations (AR) expressed by experienced auditors on accounts really audited by them on the one hand, and on the other, theoretical overall evaluations calculated using the linguistic model and with real elementary evaluations provided by the auditors.

A. Preliminary results
During interviews, two facts emerged which corroborated the remarks already made on the real audit risk evaluation process:
• Evaluation with the help of words to designate confidence in the error detection procedures never caused difficulties, thereby explaining their common use,
• The most frequently occurring overall confidence (AR) is Strong (69,6 % of the answers), thereby constituting the level of reference below which the account cannot be validated.

B. Test of the SAS 47 model
We first applied the data collected (after re-processing the linguistic values expressing confidence into probabilities of the risk of error) to the probability-based model laid down in the SAS standards. Despite the bias inherent in this kind of transformation, the results conformed to those of previous studies (Dusenbury, Reimers and Wheeler, 1996). Figure 5 shows a high degree of underestimation (the average deviation is 19%, the standard deviation 8%). However, in their example the SAS standards suggest adopting an overall risk of 5%, which is extremely restrictive (this value corresponds to a degree of confidence closer to certitude than to Very strong).
Consequently, it appears that if the overall objective is stricter in the standards than in practice (confidence Strong instead of a risk of error 5 %), on the other hand, this type of aggregation is more conservative in reality than that suggested by the probabilistic model. We consider that this result can be explained by the very nature of the information network: since it is not probabilistic in nature, used wrongly, it would be endowed with distribution of information properties (aggregation by the product) which it does not really possess, thereby leading to an overestimation of overall confidence 12 .

C. Test of the linguistic model
The application of the linguistic model previously put forward gave the following results (Figure 6):
• The average deviation is 0,0 %: the model is therefore centred on real results. It should be noted that the use of the parameter λ enables us to adjust the model on both the assets accounts and the liabilities accounts. Its optimisation is carried out with the values λ Assets = 0,18 and λ Liabilities = 0,03: the risk linked to the environment therefore has greater importance for the liabilities accounts than for the assets accounts, in accordance with the preceding reasoning. In addition, these values enable us to obtain similar behaviour (average, standard deviation) whatever the nature of the account.
• The distribution of the deviations is comparable to that of the probabilistic model (standard deviation of the deviations = 8,8 %). It seems, however, that the most relevant indicator in the case of a linguistic model is the number of identical evaluations provided by the model and by the auditor in terms of linguistic values. In 66,7 % of cases, the model provided the same linguistic evaluation as the auditor, the other cases being an immediately superior evaluation (23,2 %) or an immediately inferior evaluation (10,1 %).
• An analysis of the deviations according to the evaluation of the environment (IR) shows highly heterogeneous behaviour on the part of the model, with a very clear tendency to distance itself from the average as IR grows farther away from a value located between Moderate and Strong. One possible explanation of this phenomenon could be that the auditor is unfamiliar with operating in an "extreme" environment: he therefore modifies his appreciation of the risk by giving greater weight to the importance of the environment. Effects which are psychological 13 in nature therefore come into play which were not taken into account by the linguistic model.
We should remember that we are limited to one standard simple fuzzy operator of the average type between IR and ADR. Taking the environment more fully into account would therefore involve the modelling of risk behaviour, or the refinement of the effects of the redundancy of information between evaluations of the different types of procedures.
In conclusion, the linguistic model we are suggesting allows us to bring practice (the use of words to represent knowledge) and theory (construction of evaluation models to quantify risk) closer together. Its originality is to highlight the potential of a risk evaluation model within the framework of fuzzy logic (evaluation in the form of linguistic variables, aggregation by means of a fuzzy operator). Even though certain characteristics of this study limit its scope, both from an experimental (real data from a single firm), and conceptual (recourse to a "black box" type of process to explain elementary evaluations) point of view, the robustness and flexibility of a linguistic model help to identify some phenomena and to measure their impact on the auditor's behaviour. This study has therefore highlighted the particular role of the environment, as well as the non-symmetry of existence and completeness concerning the assets and liabilities accounts. It represents a first attempt to formalise two well-known phenomena which are experienced daily when carrying out an audit.

ACCOUNTING MODELS AND IMPERFECT INFORMATION ON RELATIONS
The management field in companies is characterised by a certain number of particularities which need to be considered in semantic terms prior to any transposition of the instruments in terms of fuzzy sets. Indeed, taking into account the nature of the imperfection weighing on the information available at model input level most often means searching for further knowledge. This approach, which tends to respect the nature of the available information while at the same time rendering its processing more complex, aims to improve coherence and relevance at decision-making level. This is particularly the case in two areas: the taking into account of interactivity between the variables and the processing of the synergy or redundancy relation.

Fuzzy interval calculations and related variables
In a management context, most of the variables are interactive to a fairly high degree. Fuzzy interval calculations require a modification of the extension principle (Zadeh, 1965) in order to integrate the relations constraining the interplay of the variables. Failure to take this necessity into account would lead to a highly pessimistic calculation which would amplify the imprecision surrounding the results obtained as output from the model. Such a situation forbids any direct transposition of fuzzy interval calculation methods to management models. A certain number of solutions to this general problem have been suggested (Dubois and Prade, 1981). We put forward an operational solution enabling management expertise to be translated into a set of relations on the variables by the use of an Algorithm for Modelling Relationships (AMR) (Lesage, 1999 b).
These algorithms allow a fuzzy relation to be constructed between the values of the domain of each of the couples of variables used for a fuzzy arithmetical calculation. Indeed, a management situation is characterised by diverse financial and economic relations between the values of the variables, whether this be a temporal relation or the relations between the variables themselves. Such knowledge is most often a question of qualitative analysis (for example, see Bailey et al., 1990). For example, we know that price influences sales: certain sales levels will not be attained without a promotional campaign. Managers are generally familiar with these relations. The AMR can be used to codify the managers' qualitative knowledge so that it can be integrated into any model based on fuzzy arithmetic.

Uses of the AMR for interactivity modelling
In the case of non-interactive fuzzy variables X and Y, the application of the generalised extension principle to, for example, the sum stands as follows: µX+Y(z) = sup x+y=z min(µX(x), µY(y)). When X and Y are interactive, this formula must be completed (Dubois and Prade, 1980): µX+Y(z) = sup x+y=z min(µX(x), µY(y), µR(X,Y)(x,y)), with R(X,Y) being the relation between X and Y.
The usual definition of the generalised extension principle therefore increases imperfection, since all the couples (X,Y) are taken to be related with a degree of membership of 1. The application of this general formula results in the classic calculation formulae on the TFNs. Conversely, in the case where a relation has been defined between X and Y, imperfection decreases, which is useful in the case of a model serving as an aid to decision-making. Let us take the following example (Figure 7):
X = (2/0,5; 3/1; 4/0,5)
Y = (2/0,25; 3/0,5; 4/0,75; 5/1; 6/1; 7/0,5)
By applying the preceding formula with the relation of Figure 7, we obtain:
X + Y : 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
µX+Y : 0,25 | 0,5 | 0 | 0,75 | 1 | 0 | 0,5 | 0,5
We can thus calculate the impact of the existence of a relation between X and Y on the imperfection of the information output, for example by taking cardinality 14 as an index, noted C:
C Case 1 (no relation) = 0,25 + 0,5 + 0,5 + 0,75 + 1 + 1 + 0,5 + 0,5 = 5
C Case 2 (with relation) = 0,25 + 0,5 + 0 + 0,75 + 1 + 0 + 0,5 + 0,5 = 3,5
We have therefore reduced the imperfection by 30% by taking the relation between X and Y into account. Although the usefulness of taking into account the relations between the variables may seem obvious, the difficulty lies in their evaluation, because only rarely are they known with any degree of precision.
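The constrained extension principle is easy to sketch on discrete fuzzy sets. The relation R below is an invented illustration (the relation of Figure 7 is not reproduced here), so the constrained cardinality differs from the text's 3,5; the non-interactive case and its cardinality of 5 follow directly from the data above.

```python
# Sketch of the generalised extension principle with a relational constraint:
# mu_{X+Y}(z) = sup over x+y=z of min(mu_X(x), mu_Y(y), mu_R(x, y)).

def fuzzy_sum(X, Y, R=None):
    """X, Y: dicts value -> membership. R: dict (x, y) -> membership,
    or None for non-interactive variables (mu_R taken as 1 everywhere)."""
    out = {}
    for x, mx in X.items():
        for y, my in Y.items():
            mr = 1.0 if R is None else R.get((x, y), 0.0)
            z = x + y
            out[z] = max(out.get(z, 0.0), min(mx, my, mr))
    return out

def cardinality(F):
    """Imperfection index C = sum of memberships."""
    return sum(F.values())

X = {2: 0.5, 3: 1.0, 4: 0.5}
Y = {2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0, 6: 1.0, 7: 0.5}

free = fuzzy_sum(X, Y)                 # case 1: no relation
print(cardinality(free))               # -> 5.0

# an (invented) relation forbidding some couples lowers C, i.e. imprecision:
R = {(x, y): 1.0 for x in X for y in Y if abs(y - x - 2) <= 1}
tied = fuzzy_sum(X, Y, R)
print(cardinality(tied) < cardinality(free))   # -> True
```

Any sharper knowledge about which couples (x, y) can actually co-occur translates directly into a lower C, i.e. less imprecision in the output.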
Between not taking them into account and their over-precise evaluation, the AMR take an intermediary route, putting forward standard relation profiles, enabling imperfection to be partially reduced without, for all that, losing any of its relevance.

Construction of the AMR
The AMR favours the following three principal types of relation: (1) no interaction: we do not know what the relation is between X and Y, which are therefore considered to be unconnected variables; (2) an increasing relation: Y tends to increase with X; (3) a decreasing relation: Y tends to decrease as X increases. This type of qualitative information is often all that is available: we have no direct knowledge of the relation, only the category to which it belongs. If we set aside case #1 (for which we consider µR(X,Y)(i,j) = 1, for all i and j), the problem posed is that of determining a profile for each of the two other categories. We suggest an algorithm enabling us to determine a profile for the relation R(X,Y), irrespective of the number of elements in each of the sets X and Y, but also easily adaptable to individual needs.

A. Choice of Scale
Although it is easy to fill in a square matrix representing a strictly proportional (or inverse) relation between two variables (simply by filling in the diagonal with 1), the general case where each of the variables possesses a different number of values must be treated while respecting the constraint of symmetry. Indeed, this property makes it possible to avoid slanting the relation by not favouring a priori any of the areas of the matrix. The AMR are based on a founding principle of fuzzy logic: losing in precision to gain in relevance. What we cannot do exactly, at the most detailed level, we will construct at a more unrefined level 15 , by elaborating an identical scale for the two sets and by making the levels of this scale correspond to real values.
Arbitrarily, each support of the two related variables is divided into five categories 16 : very weak (vw), weak (w), moderate (m), strong (s), very strong (vs). Then, each value of this support is allotted to one of these five categories by the application of the following algorithm, functioning for any number N of values and determining the factor I: N = 5·I + k, k ∈ {0,1,2,3,4}, I = Int(N / 5). By favouring the value m by default in the different cases, the algorithm respects the symmetry of the categorisation, in order to avoid a bias towards inferior or superior values. The AMR therefore allows us to divide up any support of a fuzzy subset in accordance with a single scale, enabling us to relate two fuzzy subsets with different numbers of elements, while preserving the symmetry of their relation.
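The categorisation step can be sketched as follows. The exact case table for spreading the remainder k is not reproduced in the text, so the allocation order below (middle first for odd remainders, then the inner pair) is an assumption; what the sketch preserves is the property the text requires: the split always sums to N and is symmetric.

```python
# Sketch of the scale algorithm: N support values shared among the five
# categories (vw, w, m, s, vs), I = N // 5 each, remainder k spread
# symmetrically. The spreading order for k is an assumed choice.

def categorise(N):
    """Return the number of support values allotted to each category."""
    I, k = divmod(N, 5)
    sizes = [I] * 5                               # vw, w, m, s, vs
    # assumed symmetric spreading of the remainder, favouring the middle:
    extra = {0: [], 1: [2], 2: [1, 3], 3: [1, 2, 3], 4: [1, 2, 2, 3]}
    for i in extra[k]:
        sizes[i] += 1
    return sizes

print(categorise(12))   # -> [2, 3, 2, 3, 2]
print(categorise(13))   # -> [2, 3, 3, 3, 2]
```

Whatever the spreading rule retained, the two properties to check are the ones the text insists on: the categorisation covers all N values and reads the same from vw to vs as from vs to vw.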

B. Valuation
It now only remains to evaluate the relation, i.e. to define the degrees of membership of the couples (X,Y). The kernel and the support of the relation should be determined first, bearing in mind that the only certitude is that there exist, on the one hand, the most likely couples and, on the other, the least likely couples, and that their nature depends on the relation used (increasing or decreasing).
* Standard profile of increasing functions
The rules for the most likely couples are of the form: if X is vw and Y is vw, then µR(X,Y) = 1, and similarly along the diagonal. On the basis of these supports and this kernel, the degrees of membership of the domain's other values may be calculated by maintaining two properties:
• symmetry: we have already mentioned this and it is important to preserve it,
• convexity: indeed, when we give a vague categorisation such as "proportional relation" in an imperfect information situation, it is inconceivable to have changes of direction within the relation: we must pass gradually from support to kernel, without a break, and continuously in the same increasing/decreasing direction.
The simplest procedure is linear: interpolating in lines (step = nX − (2·IX − 1), IX = Int(nX / 5)) and in columns (step = nY − (2·IY − 1), IY = Int(nY / 5)), then taking the average of these two values. In this case, the relation we finally obtain is truly convex and symmetrical.
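A standard profile on the five-category scale can be sketched directly. Membership is 1 on the diagonal (the most likely couples of an increasing relation) and decreases linearly and symmetrically towards the corners; the linear step of 1/2 is an illustrative choice, not the chapter's exact values.

```python
# Sketch of the increasing and decreasing relation profiles on the
# five-category scale (vw, w, m, s, vs). The step value is an assumption.

CATS = ["vw", "w", "m", "s", "vs"]

def increasing_profile(step=0.5):
    """Kernel on the diagonal, linear symmetric decrease towards the corners."""
    n = len(CATS)
    return [[max(0.0, 1.0 - step * abs(i - j)) for j in range(n)]
            for i in range(n)]

def decreasing_profile(step=0.5):
    """Mirror image: vw pairs with vs, w with s, and so on."""
    return [row[::-1] for row in increasing_profile(step)]

for cat, row in zip(CATS, increasing_profile()):
    print(cat, row)
```

The tuning described below fits naturally here: taking a smaller step widens the support (more imperfection), while raising the zero corners above 0 makes the profile degrade gracefully towards the independence profile.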

C. Tuning
The preceding formulation of the algorithm constitutes a median position in terms of imperfection. It is possible to increase or diminish this imperfection depending on the level of available knowledge:
• Increasing the imperfection: if the level of knowledge is so low that it does not even allow us to know whether, for example, the couples (vs,vw) are impossible in an increasing relation, we then take a value µ(vs,vw) > 0. The values situated in the corners of the matrix can then take a value other than zero, which allows continuity between the increasing and decreasing profiles via the independence profile.
• Decreasing the imperfection: if the level of knowledge is superior to that required by the three profiles, it is possible to reduce the imperfection by reducing the support: bringing the corners with the minimal value nearer to the centre of the matrix is sufficient.
This tuning allows us to move gradually from general to restricted profiles.
These adjustments are carried out in accordance with the level of knowledge concerning the relation uniting the two variables.

Application of the AMR to a calculation of turnover
The relation for the turnover is: T = Q × P. The stages are as follows:
1) Each of these variables is collected by asking questions: "In your opinion, what is the value (minimal, maximal, likely) of this variable during the period under study?". These data allow us to obtain the TFN defined by (L0, L1, U1, U0). Example: P = (7,00; 7,50; 7,70) in steps of 0,10 $; Q = (…).
(Table 1: Determination of a fuzzy relation using the AMR)
5) Finally, the product T = P × Q is obtained (Table 2). We must then take the maximum of these membership degrees, by considering the classifications; for example, with intervals of 100 $, a fuzzy set of cardinality C = Σi µ(xi) = 6 is obtained. If we had not taken the relation between Q and P into account, we would have obtained, after application of the traditional TFN calculation formulae, C = 7. That is a reduction of the imperfection of (6 − 7)/7 = −14,3 %.
The approach developed from the AMR can be used, at an operational level, in all models of financial situations requiring fuzzy interval calculations. It enables us to reduce the imperfection of the output information by incorporating existing information which has hitherto not been taken into account in models. It should be noted that an important application of the AMR consists in taking the temporal evolution of the variables into account: one only needs to apply the Cartesian product as operator to be able to model the dynamic behaviour of any management relation (see the experimental applications in Lesage, 1999 b).

Modelling synergy and financial evaluation
The determination of the value of a set of assets results from a subjective aggregation of viewpoints concerning characteristics which are objective in nature. As we have seen, the usual methods of financial evaluation are based on additive measure concepts (such as sums or integrals). They cannot, by nature, express the relationships of reinforcement or synergy which exist between the elements of an organised set such as assets. Fuzzy integrals, used as operators of non-additive integration, enable us to model the synergy relation which often underlies financial evaluation. We will present the concepts of fuzzy measure and fuzzy integral (Choquet, 1953; Sugeno, 1977) and we will then suggest various learning techniques which allow the implementation of a financial evaluation model that includes the synergy relation (Casta and Bry, 1998).

Unsuitability of the classic measurement concept
First, methods of evaluating assets presuppose, for the sake of convenience, that the value V of a set of assets is equal to the sum of the values of its components, that is: V({a1, ..., an}) = V(a1) + ... + V(an). The property of additivity, based on the hypothesis of the interchangeability of the monetary value of the different elements, seems intuitively justified. However, this method of calculation proves particularly irrelevant in the case of the structured and finalised set of assets which makes up a patrimony. Indeed, the judicious combination of assets (for example: brands, distribution networks, production capacities, etc.) is a question of know-how on the part of managers and appears as a major characteristic in the creation of intangible assets. This is why an element of a set may be of variable importance depending on the position it occupies in the structure; moreover, its interaction with the other elements may be at the origin of value creation (Figure 8).
Secondly, the determination of value is a subjective process which requires points of view on different objective characteristics to be incorporated. In order to model the behaviour of the decision-maker when faced with these multiple criteria, the properties of the aggregation operators must be made clear. Indeed, there exists a whole range of operators which reflect the way in which each of the elements can intervene in the aggregated result, such as: average operators, weighted average operators, symmetrical sums, t-norms and t-conorms, mean operators, and ordered weighted averaging (OWA) operators. Depending on the desired semantics, the following properties may be required: continuity, growth (in the widest sense of the term) in relation to each argument, commutativity, associativity, and the possibility of weighting the elements and of expressing the way the various points of view balance each other out, or complement each other.
However, these operators of simple aggregation do not allow us to fully express the modalities of the decision-maker's behaviour (tolerance, intolerance, preferential independence) or to model the interaction between criteria (dependence, redundancy, synergy) which is characteristic of the structuring effect.

Fuzzy measure and fuzzy integrals
The concept of fuzzy integral derives directly from that of fuzzy measure, and extends integration to measures which are not necessarily additive. It characterises the integral of a real function with respect to a fuzzy measure (Denneberg, 1994; Grabisch et al., 1995).

A. The concept of fuzzy measure
For a finite, non-empty set X composed of n elements, a fuzzy measure (Sugeno, 1977) is a set function µ, defined on the power set P(X) of the parts of X, with values in [0,1], such that:

$$\mu(\emptyset) = 0, \qquad \mu(X) = 1, \qquad E \subseteq F \Rightarrow \mu(E) \le \mu(F)$$

The classic additivity axiom is thus replaced by a weaker property: monotonicity. As a result, for disjoint E and F, a fuzzy measure can, depending on the modelling requirement, behave in the following manner:
• sub-additive: $\mu(E \cup F) < \mu(E) + \mu(F)$;
• additive: $\mu(E \cup F) = \mu(E) + \mu(F)$;
• super-additive: $\mu(E \cup F) > \mu(E) + \mu(F)$.
The definition of a fuzzy measure requires the measures of all the measurable parts of X to be specified, that is to say $2^n$ coefficients to be determined.
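As a sketch, a fuzzy measure on a three-element set can be stored as a table of $2^n$ coefficients and checked against the boundary and monotonicity axioms; the measure values below are invented for the illustration:

```python
# Hypothetical fuzzy measure on X = {A, B, C}: a set function
# mu: P(X) -> [0,1] with mu(empty) = 0, mu(X) = 1, and monotone.
X = ("A", "B", "C")
mu = {
    frozenset(): 0.0,
    frozenset("A"): 0.3, frozenset("B"): 0.3, frozenset("C"): 0.3,
    frozenset("AB"): 0.8,   # mu(A u B) > mu(A) + mu(B): super-additive (synergy)
    frozenset("AC"): 0.5,   # mu(A u C) < mu(A) + mu(C): sub-additive (redundancy)
    frozenset("BC"): 0.8,
    frozenset("ABC"): 1.0,
}

def is_fuzzy_measure(mu, X):
    """Check the boundary conditions and monotonicity: E <= F => mu(E) <= mu(F)."""
    if mu[frozenset()] != 0.0 or mu[frozenset(X)] != 1.0:
        return False
    subsets = list(mu)
    return all(mu[E] <= mu[F] for E in subsets for F in subsets if E <= F)

print(is_fuzzy_measure(mu, X))  # True
```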

B. The concept of fuzzy integrals
The redefinition of the concept of fuzzy measure implies calling into question the definition of the integral with respect to a measure (Choquet, 1953; Sugeno, 1977). Sugeno's integral of a measurable function f: X → [0,1] with respect to a measure µ is defined as:

$$S_\mu(f) = \max_{i=1,\dots,n} \min\left(f(x_{(i)}),\, \mu(A_{(i)})\right)$$

where the elements of X have been reordered so that $f(x_{(1)}) \le \dots \le f(x_{(n)})$ and $A_{(i)} = \{x_{(i)}, \dots, x_{(n)}\}$. Using only the operators min and max, this integral is not appropriate when modelling synergy. Choquet's integral of a measurable function f: X → [0,1] with respect to a measure µ is defined as:

$$C_\mu(f) = \sum_{i=1}^{n} \left[f(x_{(i)}) - f(x_{(i-1)})\right] \mu(A_{(i)}), \qquad f(x_{(0)}) = 0$$

or, equivalently, $C_\mu(f) = \int_0^1 \mu(\{x \mid f(x) > y\})\,dy$. Moreover, by defining the indicator function 1(A=B), which takes the value 1 if A=B and 0 otherwise, we have:

$$C_\mu(f) = \sum_{A \subseteq X} \mu(A) \int 1\left(A = \{x \mid f(x) > y\}\right) dy$$

If we denote by $g_A(f)$ the value of the expression $\int 1(A = \{x \mid f(x) > y\})\,dy$, Choquet's integral is expressed in the following manner:

$$C_\mu(f) = \sum_{A \subseteq X} \mu(A)\, g_A(f)$$

Choquet's integral uses the usual sum and product as operators. It extends Lebesgue's integral to a measure which is not necessarily additive (Figure 9). As a result of monotonicity, it is increasing with respect to both the measure and the integrand. Choquet's integral can therefore naturally be used as an aggregation operator.
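A minimal sketch of the discrete Choquet integral in its reordered form, with an invented measure and invented criterion scores:

```python
def choquet(f, mu):
    """Discrete Choquet integral of f: X -> [0,1] w.r.t. a fuzzy measure mu.

    C_mu(f) = sum_i [f(x_(i)) - f(x_(i-1))] * mu(A_(i)),
    where f(x_(1)) <= ... <= f(x_(n)) and A_(i) = {x_(i), ..., x_(n)}.
    """
    xs = sorted(f, key=f.get)       # x_(1), ..., x_(n) by increasing score
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        A = frozenset(xs[i:])       # the "upper" set A_(i)
        total += (f[x] - prev) * mu[A]
        prev = f[x]
    return total

# Hypothetical measure and scores (invented for the example):
mu = {frozenset("A"): 0.3, frozenset("B"): 0.3, frozenset("C"): 0.3,
      frozenset("AB"): 0.8, frozenset("AC"): 0.5, frozenset("BC"): 0.8,
      frozenset("ABC"): 1.0, frozenset(): 0.0}
f = {"A": 0.2, "B": 0.5, "C": 0.9}
print(choquet(f, mu))  # ≈ 0.56
```

With an additive measure the same computation collapses to a weighted average; the non-additive coefficients are what let the operator reward (or penalise) particular coalitions of criteria.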

C. Principal applications of fuzzy integrals
Fuzzy integrals first found a particularly suitable field of application in the control of industrial processes (Sugeno, 1977). The approach subsequently opened fresh perspectives in economic theory on themes such as non-additive probabilities, expected utility without additivity (Schmeidler, 1989), and the paradoxes of behaviour in the face of risk (Wakker, 1990). More recently, fuzzy integrals have been used as aggregation operators for modelling multicriteria choice, particularly for problems of subjective evaluation and classification (Grabisch and Nicolas, 1994; Grabisch et al., 1995). In these latter applications, fuzzy integrals possess the properties usually required of an aggregation operator whilst providing a very general framework for formalization.
The fuzzy integral approach means that the defects of classic operators can be compensated for. Including most other operators as particular cases, fuzzy integrals permit detailed modelling of:
• The structuring effect: through the specification of weights not only on the criteria but also on groups of criteria, the interaction and interdependency of criteria can be taken into account: µ is sub-additive when the elements are redundant; µ is additive for independent elements; µ is super-additive when expressing synergy and reinforcement.
• The compensatory effect: all degrees of compensation can be expressed by a continuous movement from minimum to maximum.
• The semantics underlying the aggregation operators (Casta and Bry, 1998).

Modelling using Choquet's integral presupposes the construction of a measure which is relevant to the semantics of the problem. Since the measure is not a priori decomposable, it is theoretically necessary to define the value of $2^n$ coefficients. Determining a fuzzy measure (that is to say, $2^n$ coefficients) is a problem for which very many methods have been elaborated (Grabisch and Nicolas, 1994; Grabisch et al., 1995). We suggest an indirect econometric method for estimating the coefficients. Moreover, in cases where the structure of the interaction can be defined approximately, it is possible to reduce the combinatory element of the problem by restricting the analysis of synergy to the interior of the useful subsets (see Casta and Bry, 1998).

More precisely, we suggest a method of indirect estimation on a learning sample made up of companies for which both the firm's overall evaluation and the individual value of each element in its patrimony are known. Let us consider I companies described by their overall value v and by a set X of J real variables $x_j$ representing the individual value of each element of the assets.
The value of the variable $x_j$ for company i is written $f_i(x_j)$, which defines a function $f_i: X \to \mathbb{R}$. We are trying to determine the fuzzy measure µ so that, overall, the Choquet integral comes closest to the observed values:

$$v_i \approx C_\mu(f_i), \qquad i = 1, \dots, I$$

Learning method of fuzzy measures
Let us call the "generating variable" corresponding to part A (of the set of variables $x_j$) the variable defined as:

$$g_A(i) = \int 1\left(A = \{x_j \mid f_i(x_j) > y\}\right) dy$$

Thus, we obtain the model:

$$v_i = \sum_{A \subseteq X} \mu(A)\, g_A(i) + u_i$$

in which $u_i$ is a residual which must be globally minimised in the adjustment. It is possible to model this residual as a random variable or, more simply, to restrict oneself to an empirical minimisation of the ordinary least squares type. The model given above is linear, with $2^J$ parameters: the µ(A) for all the subsets A of variables $x_j$. The dependent variable is the value; the explanatory variables are the "generators" corresponding to the parts of X. A classic multiple regression provides the estimations of these parameters, that is to say the required measure.
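The linearity in the generators is what makes ordinary least squares applicable. The following sketch (measure and asset values invented) checks that the generator decomposition reproduces the direct Choquet computation, which is the identity the regression exploits:

```python
from itertools import combinations

VARS = ("A", "B", "C")
SUBSETS = [frozenset(c) for r in (1, 2, 3) for c in combinations(VARS, r)]

# Hypothetical fuzzy measure (coefficients invented for the sketch):
mu = dict(zip(SUBSETS, [0.3, 0.3, 0.3, 0.9, 0.5, 0.9, 1.0]))

def choquet(values):
    """Direct form: C = sum_i [f(x_(i)) - f(x_(i-1))] * mu({x_(i), ..., x_(n)})."""
    xs = sorted(values, key=values.get)
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        total += (values[x] - prev) * mu[frozenset(xs[i:])]
        prev = values[x]
    return total

def choquet_linear(values, levels=(0, 1, 2, 3)):
    """Generator form: C = sum_A mu(A) * g_A, with g_A counting level sets (dy = 1)."""
    g = {A: 0.0 for A in SUBSETS}
    for y in levels:
        above = frozenset(x for x in VARS if values[x] > y)
        if above:
            g[above] += 1.0
    return sum(mu[A] * g[A] for A in SUBSETS)

for firm in ({"A": 2, "B": 2, "C": 3}, {"A": 4, "B": 1, "C": 0}):
    print(choquet(firm), choquet_linear(firm))   # the two forms agree
```

Stacking the $g_A(i)$ of a learning sample as the design matrix and regressing the observed values $v_i$ on them then yields the µ(A) as ordinary regression coefficients.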
In practice, we consider the discrete case with a regular subdivision of the values, $y_0 = 0,\ y_1 = dy,\ \dots,\ y_n = n \cdot dy$, and for each group A of variables $x_j$ we calculate the corresponding "generator" as:

$$g_A(i) = \sum_h 1\left(A = \{x_j \mid f_i(x_j) > y_h\}\right) \cdot dy$$
The following principle will be used to interpret the measure thus obtained, for disjoint A and B:

$$\mu(A \cup B) > \mu(A) + \mu(B) \;\Leftrightarrow\; \text{synergy between A and B}$$
$$\mu(A \cup B) < \mu(A) + \mu(B) \;\Leftrightarrow\; \text{mutual inhibition between A and B}$$

It should be noted that the suggested model is linear in relation to the "generating" variables, but obviously strongly non-linear in relation to the variables $x_j$. Moreover, the number of parameters only expresses the most general combination of interactions between the $x_j$. For a small number of variables $x_j$ (up to 5, for example), the calculation remains feasible. It is then a question not only of estimating the parameters, but also of interpreting all the differences of the type $\mu(A \cup B) - \mu(A) - \mu(B)$. For a greater number of variables, one might consider either performing a preliminary Principal Components Analysis and adopting the first factors as new variables, or restricting a priori the number of interactions sought.

A. Method of estimating the measure: numerical illustration
Consider a set of 35 companies evaluated both globally (value V) and through an individual evaluation of three elements of the assets (A, B and C). Since the values of A, B and C are integer numbers (between 0 and 4), we have divided the value field into intervals of dy = 1, so that calculating the "generators" is very simple. For example, let us take the company i = 3, described in the third line of Figure 10, and represent its values for A, B and C (source: Casta and Bry, 1998; Figure 10: calculation of the "generator" for the firm i = 3). From this we deduce the values of the "generators": $g_{\{C\}}(i) = 1$ and $g_{\{A,B,C\}}(i) = 2$; the other "generators" have a value of 0. For the whole sample of companies we obtain the corresponding "generators" (Table 3).

The interpretation of the estimated measure is simple in terms of the structuring effect: each of the criteria A, B and C has more or less the same importance in isolation. But there is a strong synergy between A and B on the one hand, and between B and C on the other; A and C partly inhibit each other (possible redundancy between the two). There is no synergy specific to the three criteria grouped together: the measure of {A,B,C} differs little from the sum of the measures of {A,B} and {C}, or of {B,C} and {A}, for example.
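Since Figure 10 is not reproduced here, the asset profile below (A=2, B=2, C=3) is a hypothetical one that is consistent with the generators reported for firm i=3; the sketch recomputes them by scanning the thresholds $y_h$:

```python
def generator_counts(values, levels=(0, 1, 2, 3)):
    """For each threshold y_h, record the set {x : values[x] > y_h}; the
    generator g_A counts (times dy = 1) how often that set equals A."""
    counts = {}
    for y in levels:
        above = frozenset(x for x, v in values.items() if v > y)
        if above:
            counts[above] = counts.get(above, 0) + 1
    return counts

# A=2, B=2, C=3 is one (hypothetical) profile consistent with the generators
# reported for firm i=3 in the chapter: g_{C} = 1, g_{A,B,C} = 2.
firm3 = {"A": 2, "B": 2, "C": 3}
counts = generator_counts(firm3)
print(counts[frozenset("ABC")], counts[frozenset("C")])  # 2 1
```

At thresholds y = 0 and y = 1 all three assets exceed y (generator of {A,B,C}); at y = 2 only C does (generator of {C}); at y = 3 none does.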

B. A priori limitation of the combinatory effect of the interactions
Instead of considering the whole set P(X) of the parts A to define the measure, we consider only a limited number of these parts, the measure µ of the other parts being defined univocally by a rule of extension. The combinatory element of the problem can thus be partially controlled (Casta and Bry, 1998).
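One possible reading of such an extension rule (the measure values and the choice of "useful" interacting pair below are invented): specify µ on the singletons and on the few pairs whose interaction matters, and extend additively everywhere else:

```python
# Hypothetical singleton weights, chosen so that mu(X) = 1 once the single
# specified interaction bonus is included.
singletons = {"A": 0.25, "B": 0.25, "C": 0.2, "D": 0.15}
pairs = {frozenset("AB"): 0.65}      # only the A-B synergy is modelled

def mu(S):
    """Extension rule: additive over singletons, plus the specified pair bonuses."""
    S = frozenset(S)
    total = sum(singletons[x] for x in S)
    for P, v in pairs.items():
        if P <= S:                   # the interacting pair lies inside S
            total += v - sum(singletons[x] for x in P)
    return total

print(mu("AB"))    # ≈ 0.65 (the specified pair value)
print(mu("AC"))    # ≈ 0.45 (purely additive: no interaction specified)
print(mu("ABCD"))  # ≈ 1.0
```

Only the singletons and the chosen pairs have to be estimated, which is how the $2^n$ combinatorics can be kept under control.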
At the end of this review of the possibilities offered by fuzzy integrals, we have observed that there exist very many potential fields of application in finance for this category of operators. They enable the effects of micro-structure, synergy and redundancy, which are obscured by linear models, to be analysed in detail.
There is a price to be paid for this sophistication in terms of the complexity of the calculations. However, we have tried to show that these techniques allow us to limit the purely combinatory effects which appear at the learning stage of the measure.

MANAGEMENT DECISIONS AND IMPERFECT INFORMATION
Besides the extension of the relevant areas of classic accounting and controlling models, imperfect information has cognitive implications for the decision process. After a brief survey of the major drawbacks of uncertainty for decision making, we will propose a new theoretical approach. Supported by experimental results, it enables us to stress the cognitive advantages of imperfect information for decision making.

Ambiguity, Intolerance and Financial Reporting
The role of ambiguity intolerance in the financial reporting decision-making process has been the subject of a great deal of research. Research papers have centred on the study of the cognitive characteristics of individuals when faced with a choice or when needing information. Research on the reaction of an individual in a situation of ambiguity shows that he may, according to his cognitive characteristics, adopt one of two diametrically opposed attitudes: ignore the problem or seek further information (Budner, 1962; Norton, 1975). In the field of accounting choice, work on the relationship between the individual's cognitive characteristics and the demand for information (nature and quantity) deemed necessary for taking a decision has produced contradictory results. Certain researchers show that cognitive characteristics have an impact on the individual's preferences with regard to information (Dermer, 1973): the less an individual tolerates ambiguity, the more he will search for information to increase his confidence in his decisions (see Gul, 1986, for applications to the auditor's opinion). Other researchers, on the contrary, have not observed any significant effect (McGhee et al., 1978). More recently, the relationship between an accountant's degree of ambiguity intolerance, professional affiliation and/or level of education on the one hand, and his desire to have alternative accounting methods at his disposal on the other, has been studied (Faircloth and Ricchiute, 1981). This research established that level of education is the only explanatory variable which influences an accountant's wish to have several accounting methods at his disposal. More generally, this type of study does not allow a significant relationship to be established between ambiguity intolerance and the accountant's desire to possess alternative accounting methods (for example, FIFO versus LIFO in stock evaluation).

The reduction of cognitive biases
The acceptance of the imperfection of information generates a change of technique: we abandon Boolean algebra for fuzzy logic (Gil Aluja, 1996). But it also introduces a change of paradigm: modelling becomes a representation of knowledge, which places it within a cognitive paradigm. Fuzzy logic thus gives us new ways to improve the cognitive processing of information.

A. The "function of data"-based model
The models traditionally used in management accounting derive from a physical conception of the company. The values of the variables are measured, then incorporated into a model which allows us to obtain perfect information on the phenomenon. Within this framework, information has the status of data: no matter who has done the measuring, the result given by the model will always be the same. This is the syntactical dimension of knowledge, which stems from the idea of information as a physical, and therefore measurable, object (Eco, 1988).

The ontological approach to this question is treated by Shannon's theory of information. It considers information to be a measurable quantity (the unit is the bit) that must be transmitted with maximum efficiency. The question which then arises concerns how to encode the information into a signal which will ensure that the receiver clearly receives what the transmitter has sent, and therefore acts appropriately. This apparent simplicity hides two problems: redundancy and entropy. Shannon's theory (Shannon and Weaver, 1949) resolves these by evaluating the receiver's expectations in the form of probability. The theory is highly formalised, which enables its practical application in the field of transmission techniques.

The phenomenological approach to information considered as data is based on the algorithmic theory of information, initiated by Kolmogorov and Chaitin (for a detailed presentation, see Chaitin, 1987). According to this theory, the measure of the information contained in (or transmitted by) a system is the size of the smallest program needed to define it. Such an approach therefore allows us to determine to what extent a system can be modelled, from the point of view of the feasibility of implementing a program to process the available information. By combining these two approaches we arrive at a syntactical model based on "functions of data".
For example, cost analysis has always privileged the syntactical dimension of modelling, whether from an ontological (confusion between objective information since it derives from a measurement, and knowledge) or phenomenological point of view (limited complexity of cost calculation algorithms, use of processing modes which both justify and are justified by, measurement theory). This dimension is indeed that of perfect information. If traditional tools stop at the calculation of data and do not go as far as the modelling of action, it is perhaps because there is a deeply "disruptive" element between these two points: the human being and the meaning he bestows on information.

B. The "representation of knowledge"-based model
As soon as we accept the imperfection which results from subjectivity, the status of the information used by models changes: information becomes knowledge, that is to say, information which comes from the process of interpreting the environment. So information may exist (journalistic, for example) which will be interpreted differently depending on the opinion of each person. A difference is therefore established between the physical support of information (syntactical dimension) and its interrelation with the person who has obtained "knowledge" (Newell, 1982) by giving meaning to the information (semantic dimension).
The ontological aspect of the semantic approach concerns the semantic structures. Various reference points serve as supports for sign theories: • We are attempting to move from a problem of representation to a problem of the representation of knowledge, or of how to (re)encode an information message as a meaningful message (theory of information). • Any cultural object (and therefore knowledge such as cost) is an object of communication, which may be reduced to a system of symbolic signs, a sort of semantic code which serves to transmit the meaning carried by that object (theory of the sign, or semiotics). Within such a framework, a cost analysis system becomes a sign system, bearing meanings carried by a specific code. The difference between this and an information transmission system lies in the following two points: • The transmission of meaning is a reversible two-way process: meaning is not only transmitted from the transmitter to the receiver; the reverse also occurs. • The fundamental uncertainty of the meaning system: it is a sort of backdrop which is essential to the knowledge system, and which an information system resists. Indeed, one of the first observations we can make on the meaning process is that it is an indirect process, in which the phenomena cannot be reduced to a simple causal explanation (Eco, 1988).
Peirce's theory of triadic interpretation (Peirce, 1978) enables us to understand this distinction. This theory is based on three complementary concepts: • "Firstness": raw, general data, the potentiality of existence; • "Secondness": a particular fact, raw action/reaction (Pavlov and Skinner's reflex phenomenon: the neuro-physiological domain); • "Thirdness": the introduction of the mental element, the attachment of a particular experience to a general law, judgement on action and the mode of action: the domain of cognitive interpretation. "Thirdness" appears when there is no automatic application of tested routines of action: in this case, triadic interpretation implements the triad object-sign-interpreter. The signal operates as a signifier and does not provoke action directly. It is a code which requires many complex operations of understanding and decision-making on the part of the receiver. A meaning process is therefore present as well as the information process. This deep distinction generates opposition between the determinism of the information system and the uncertainty of a knowledge system. Understanding the underlying mechanisms of the knowledge system is the domain of the cognitive sciences (neurosciences, artificial intelligence, philosophy, psychology, linguistics). This burgeoning of techniques, approaches, and cultures generates sometimes conflicting hypotheses. D. Andler (1992) suggests grouping the most common ontological hypotheses into three main categories in the following manner: • The description and explanation of cognitive phenomena on a purely physicalist level (bio-chemico-physical) has proved inadequate and must therefore be completed by a representational level; • The transformations these states will undergo are not merely physical, but may be considered as calculations on the representations carried by these states; • Although any cognitive system may be defined as a process between a stimulus and the reaction to this stimulus, the internal processes seem always to be autonomous with regard to their causes.
It can be seen, therefore, that our problem is clearly a matter for the cognitive sciences. More specifically, cognitive psychology has helped to highlight two phenomena which are at the origin of the uncertainty of the interpretation process: • Denotation: the denotation of a sign system designates the class of objects that it represents: it is therefore a representation shared by all, notably as a result of regular learning. • Connotations: the connotations (and not connotation) are interpretations which are attached to signs which have the particularity of being alternatives, even optional.
Unlike denotation, connotations are not shared, or at least not in the same way, by all the players in the meaning process. They modulate the basic denotative code in order to interpret it. Connotations are numerous: their abundance makes any knowledge system complex. Indeed, it is very difficult to identify them: their processes of generation and transformation have not yet been clearly identified (Piaget and Inhelder, 1966).
This multiplicity of codes (Eco, 1972) should lead us to accept the idea that there can be no bijection between a meaning and the encoding. We are therefore very far from the isomorphism between reality and the model created by the theory of measurement. The uncertainty of the meaning process prevents the treatment of semantic structures by an algorithmic process. Indeed, it becomes inevitable to fall back on a dimension of the functional aspect of semantics which is also cognitive, and this is the role that fuzzy logic should play.
The leading discipline in the treatment of the semantic dimension is artificial intelligence, through the component which is strongly linked to the cognitive sciences (Ganascia, 1990). Semantic representations of the data therefore form the basis of what the discipline terms the "representation of knowledge". We should not forget that artificial intelligence was the cradle of fuzzy logic; it was also in this domain that fuzzy logic made its most important and spectacular developments. The relationships between fuzzy logic and the cognitive sciences are effectively very close. An effective treatment of the problem of transmission-reception of meaning should enable us to obtain sufficient compatibility between the interpretations of the various individuals to ensure the transfer of knowledge and therefore generate relevant action. We should remember that a fuzzy number is made up of all the various possible cases, to each of which we attach a membership degree evaluating the likelihood of the case occurring. In other words, a fuzzy representation is a graduated set of possibilities. It is therefore a representation likely to highlight the margins for interpretation, and it should be applied in accordance with the following two cases: • Individual representation: a single individual, in view of the imperfection of the available information, will be able to highlight the cases which seem possible to him in accordance with the interpretations he places on this elementary information. By visibly preserving all the cases which seem even vaguely possible, the evaluator is reassured of the capacity of the representation (and therefore of the interpretation) he is transmitting to match reality when it occurs. Inevitably, there is a greater chance that one of the cases forecast using a fuzzy representation will occur than with a traditional representation.
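A fuzzy number of this kind is often sketched with a triangular membership function; the forecast figures below are invented for the illustration:

```python
def triangular(a, m, b):
    """Membership function of a triangular fuzzy number: fully plausible at the
    mode m, linearly less plausible towards the support bounds a and b."""
    def mu(x):
        if x <= a or x >= b:
            return 0.0
        if x <= m:
            return (x - a) / (m - a)
        return (b - x) / (b - m)
    return mu

# A (hypothetical) sales forecast: "around 100, surely between 80 and 130".
forecast = triangular(80, 100, 130)
print(forecast(100))  # 1.0  (the central case is fully possible)
print(forecast(90))   # 0.5  (less plausible, but kept in the representation)
print(forecast(70))   # 0.0  (excluded from the support)
```

The whole graduated set of cases, not a single point, is what gets transmitted: the membership degree carries the evaluator's margin of interpretation along with the estimate itself.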
• Collective representation: different individuals, possessing the same information, may have different points of view on the same system. The problem which arises is the following: is it a question of real differences in modelling, or can the approximations made by different people as a result of different interpretations, converge within a single model on condition that it visualises the margins for interpretation ? By using a model based on techniques deriving from fuzzy logic, not only do we accept the imperfection of the information at input, but the resulting information which is inevitably imperfect may correspond to the needs of imperfection arising from the taking into account of the margin of interpretation of the cognitive individuals.
Fuzzy logic therefore allows us to design "Representation of knowledge" models, which necessarily belong to the cognitive domain. However, such a change of paradigm generates practical advantages drawn from the field of cognitive psychology.
C. The cognitive advantage of a "representation of knowledge"-based model

The phenomenon of cognitive dissonance
Cognitive psychology is concerned with the problem of representation, that is to say with the cognitive content of the information processed by an individual (Anderson, 1990). Representations are circumstantial constructions, made in a particular context for specific ends, elaborated in a given situation to enable us to meet prevailing requirements. If the situation changes, or if a new element occurs, then the representation must be modified. During effective use, the structure of a new representation is made by activating knowledge within the working memory, thereby allowing new information to be generated which will eventually be converted into knowledge which is stored in the permanent memory. Finally, understanding means constructing a representation, that is to say elaborating an interpretation which is compatible with the data of the situation, symbolic (statement, text, drawing) or material (physical objects), with the task to be done and with the knowledge in the memory.
In the case of well-defined problems, representations should respect the rules of canonical reasoning: the information available is complete and sufficient to resolve the problem. This is the case of problems involving logic, chess games, etc. However, these modes of reasoning correspond to situations which are extremely rare in reality. We know that managers are mainly confronted with ill-defined problems (Mintzberg, Raisinghani and Théorêt, 1976), that is to say, they have to deal with imperfect information. These problems are solved by being constantly restructured during a repeated cognitive process of re-interpretation. The mode of reasoning adopted does not therefore follow the precepts of canonical reasoning. The heuristics of judgement employed to solve the problem involve a simplification which stems from a process of interpretation and generates many cognitive biases, that is to say, gaps between canonical reasoning and the way individuals really reason (Caverni, Fabre and Gonzalez, 1990).
The process of simplification has been designated by the term "belief structure", (or cognitive schema, or knowledge structure). Some experiments in strategic management have shown that the cognitive process of belief structures came into play to structure the turbulence of the information flows as quickly as possible in order to facilitate the processing of information and decision-taking (Lord and Foti, 1986). Three functions may be attributed to belief structures: • They help individuals to structure, • They help them to interpret and evaluate information by estimating the similarity or the compatibility of the information with existing beliefs, compatible information then being treated differently from non-compatible information, • They mould the affective reactions of individuals to information, by providing them with a framework for positive and negative feelings.
We know that familiarisation with a given information context enables us to reduce the cognitive load by making numerous information-processing operations routine, or automatic. Consequently, belief structures, by privileging automatic modes, effectively allow cognitive economy by making the tasks to be performed easier and faster. But this simplification can lead to mistaken interpretation: does the individual not act with a time-lag when applying stereotyped schemata? Psychologists have multiplied their observations on the negative effects of belief structures, some of which are contradictory (Walsh, 1988; Dearborn and Simon, 1958). However, the summaries drawn up reveal the following principal cognitive biases: • Confirmation biases: individuals privilege information which confirms their propositions, thereby ignoring contradictory, and potentially important, information; • Atmosphere biases: individuals draw conclusions from an overall impression provided by the premises; • Conservatism biases: new information is systematically under-estimated, thereby encouraging stereotyped thinking and inhibiting the creative resolution of problems.
Finally, the existence and activation of belief structures enables information to be processed: • effective when the actual situation corresponds to the belief structure, • ineffective when these conditions are not fulfilled.
The position we are defending is as follows: representation by fuzzy logic enables us to reduce the cognitive biases resulting from the processing of information generated by a "representation of knowledge" -based model.
The cognitive advantage of representation through fuzzy logic
An individual whose representation of imperfect information is based on fuzzy logic may represent, not information which gives him a single precise number, but a range of more or less plausible information. In this case, since he is not focusing on one proposition but on a set of propositions, he is less likely to be influenced by confirmation and conservatism biases. More exactly, these biases will cover a value set (the kernel, for example), which means that more propositions will be treated in the same way, thereby decreasing, at the least, the biases between them, as Figure 11 illustrates. Source: (Lesage, 1999b)

Figure 11: Cognitive biases and representations
In both cases, the privileged information (large arrows) is that which confirms the initial proposition and which invalidates the other propositions. In the classic case, such a process leads to a highly discriminating situation, where a single piece of information is heavily privileged, whilst the initial situation might just as well have been focused on (X-1), for example. In the fuzzy case, where a range of values is privileged, discrimination is weaker, since (X-1) undergoes the same treatment as X. We have here a classic result of fuzzy logic, one of the advantages of which is robustness: weak variations at input generate weak variations at output. It would therefore seem possible that a representation of management information through fuzzy logic enables us to reduce the effects of cognitive biases by reducing the focus on one particularly privileged piece of information.
The consequences within the framework of a system of transmission of meaning can be illustrated at the level of each of its participants: • the transmitter: why should the transmitter provide precise information? On the one hand, he has no decision to take, which alone would have required clear-cut action. On the other, such information would imply a choice on his part between the various interpretations he considers possible. Not only does he impoverish the knowledge he is transmitting, but he risks making a mistake in his interpretation: there is a social cost to pay for this in terms of image, recognition by others, etc. Why should the transmitter pay this price? His role is simply to feed the decoder with knowledge so that the receiver may make a decision. One of the fundamental principles of expert systems based on fuzzy logic is that imperfection is preserved as far as possible throughout the processing of knowledge, in order to "defuzzify" only when a clear-cut position must be obtained (in the case of action to be taken, for example).
The representation of this knowledge by means of a fuzzy number seems to correspond to the needs of the transmitter: it indicates a fully plausible range, without abandoning less likely possibilities which are nonetheless present in the evaluation of sales forecasts. We may therefore consider that the ambiguity inherent in the process of interpretation is taken into account in the imperfection of the information represented by means of fuzzy logic: the different interpretations not only have more chance of being represented in the various ranges of estimations used by a fuzzy number, but their level of plausibility, evaluated by the value of the membership degree, is transmitted at the same time as the evaluation of the variable in itself. • the coder/decoder: in a classic system of information transmission, this role is allotted to an algorithm. However, if the model using the information transmitted is based in its turn on imperfect knowledge, then a process of interpretation has had to be implemented to select the variables and their interrelation, etc. Fuzzy logic also proves necessary at this level if we are to be able to take into account the necessary margins of interpretation arising from a modelling exercise. Apart from anything else, the use of this mathematical framework allows us to treat representations of varying origins (classic, statistical, or fuzzy), and which possess different margins of interpretation (because they are from different sources and contexts) coherently. • the receiver: unlike the transmitter, the receiver must take a decision and a certain course of action. Let us take two opposing situations concerning the type of representation, one by one.
A. Classic representation: the receiver receives a number representing the situation on which he must decide a course of action. Either he takes this number directly into account and acts accordingly: he then serves exactly as an algorithm, treating only the syntactical dimension. Or he interprets this information in the light of his own experience, and notably his knowledge of the way this information is constructed: he knows it results from initial knowledge subject to approximation and from a necessarily incomplete model. In this case, he has no way of knowing whether his interpretation is close to that of the organisation, because the latter is represented only by a single point. He does not know, therefore, whether his interpretation lies on the same level of coherence, or whether it describes another reality. It can be observed that such an interpretation will be largely based on tacit representations that the decision-maker may have difficulty in justifying.
B. Fuzzy representation: we note that in this case there can be no strict application of an algorithm. The fuzzy representation necessarily generates an interpretation on the part of the decision-maker. At this stage, we can again distinguish between two positions. Either he takes a position which was forecast, to a greater or lesser degree, by the fuzzy representation. In this case, he has played his role as a decision-maker in interpreting the knowledge provided for him by his organisation. Or he takes a decision outside of the representation. He then knowingly places himself outside the collective representation, although he knows that it already encompasses all the differences in preceding interpretations. It therefore seems possible that a confrontation of representations has taken place, which helps to explain the different points of view. Within this framework, the use of a fuzzy representation thus facilitates communication, whether directly (easier acceptance of the collective representation) or indirectly (it is easier to ask for a justification of an individual representation that differs from the collective one).
Finally, use of fuzzy logic in a "representation of knowledge" model should allow us to reduce the cognitive biases thanks to the following elements: • it enables the ambiguity of the interpretation to be represented; • it decreases the social cost of an error in interpretation; • it favours the convergence of individual and collective representations; • it places responsibility for the decision with the effective decision-maker.
These advantages, established theoretically using developments stemming from cognitive psychology, have been highlighted experimentally, and measured within the framework of a business start-up simulation.

Performance and Imperfection
The hypotheses previously put forward were tested in an experimental situation which simulated business start-ups (Lesage, 1999b). The hypothesis of cognitive ergonomics enabled us to give a coherent interpretation of the results obtained.

A. The experimental protocol
On the basis of a software program simulating a business start-up, subjects were distributed at random between two samples, one "fuzzy", the other "classic". Each subject played the role of the founder of a business, whose objective was to maximise the value of his/her business, under the dual pressure of monetary and social incentives. The subject possessed information on the market and the production tool. All the other characteristics (demand, variable cost, fixed charges, behaviour over time) were fixed within the experimental framework, and the subjects' knowledge of them was imperfect. At the beginning of the simulation, subjects forecast the sales and the value of their companies, using a different forecasting tool depending on the sample: the "classic" sample used a classic Cost-Volume-Profit model and the "fuzzy" sample used a fuzzy Cost-Volume-Profit model. Each subject thereby obtained forecast curves of sales and of the value of the company, which differed according to the sample: • Fuzzy sample: a curve graded by degree of possibility. • Classic sample: a curve which follows a classic "trajectory". Over time, the real values were plotted on the projected curves: the subject could therefore see the divergence between forecasts and reality. The subject regularly had to take three decisions: • The level of investment: investments were left to the initiative of the subject, within the limit of the available funds. The subject could buy a certain number of units of a single machine, whose characteristics (production, costs) were fixed by the experimenter and imperfectly known to the subject. • The level of production: the subject influenced the cost structure by choosing the production rate at three levels: normal activity, over-activity, and under-activity. • The sales price and the quantity sold during the sales phases. The subject had to find combinations (price × quantity) to sell his production. The market structure evolved as the subject made transactions, in accordance with a mechanism of dynamic equilibrium.
The first two decisions had to be taken in advance and were irreversible. The projected curves therefore served as a support for these decisions. The description of the experimental business model thus reveals the following elements: • A single-person, mono-product structure; • 3 types of decisions: investment, production rate, and sales (quantity/price); • An imperfect level of information: some information is approximate, other information is missing; • An objective: to maximise the value of the business; • A difference between the two samples: the representation of the projected curves resulting from a classic cost-volume-profit model or a fuzzy cost-volume-profit model.
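The contrast between the two forecasting tools can be sketched as follows. This is a minimal illustration with invented figures and a simplified alpha-cut propagation, not the experiment's actual software; the function names and parameters are hypothetical:

```python
# Sketch: a classic Cost-Volume-Profit model returns one profit figure,
# while a fuzzy CVP model propagates triangular estimates (low, most
# plausible, high) through interval arithmetic on alpha-cuts.

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def classic_cvp(quantity, price, unit_cost, fixed_costs):
    return quantity * (price - unit_cost) - fixed_costs

def fuzzy_cvp(quantity, price, unit_cost, fixed_costs, alphas=(0.0, 0.5, 1.0)):
    """Profit intervals at several plausibility levels."""
    out = {}
    for alpha in alphas:
        q = alpha_cut(quantity, alpha)
        p = alpha_cut(price, alpha)
        v = alpha_cut(unit_cost, alpha)
        f = alpha_cut(fixed_costs, alpha)
        # Worst case: low volume, low price, high costs; best case: the
        # opposite. Assumes the unit contribution margin stays positive.
        lo = q[0] * (p[0] - v[1]) - f[1]
        hi = q[1] * (p[1] - v[0]) - f[0]
        out[alpha] = (lo, hi)
    return out

print(classic_cvp(1000, 12.0, 7.0, 3000))          # single "trajectory" point
print(fuzzy_cvp((800, 1000, 1300), (11, 12, 13),
                (6, 7, 8), (2800, 3000, 3300)))    # graduated profit ranges
```

At alpha = 1 the fuzzy model collapses to the classic point forecast; at lower alpha levels it widens into the graduated ranges the "fuzzy" subjects saw.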
The experiment therefore consisted of analysing the impact of this difference in representation on the performances obtained.

B. Measures and tests
Two types of measurement were adopted: quantitative measurements calculated by the software program, and qualitative measurements taken by means of questionnaires distributed during the experiment. The most significant measurements are summarised in (Lesage, 1999b). These measures were used in statistical tests, which consisted essentially of value comparisons between the two samples, the "classic" sample being considered the control sample. From a cognitive point of view, a variation in stimulus (classic or fuzzy projected curves) was introduced between the two samples. The objective was to identify the consequences on action (investment, production, and sales decisions) by means of quantitative variables (mainly the value of the company) and qualitative variables (mainly perceived uncertainty and confidence).
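The kind of between-sample comparison described above can be sketched with a simple permutation test; the performance figures below are invented for illustration, and a permutation test is chosen here only because it needs no distributional assumption, not because it is the test actually employed in (Lesage, 1999b):

```python
# Sketch: one-sided permutation test on the difference of mean
# performance between a "fuzzy" sample and a "classic" control sample.
import random
from statistics import mean

def permutation_test(sample_a, sample_b, n_perm=10_000, seed=42):
    """p-value for H1: mean(sample_a) > mean(sample_b)."""
    rng = random.Random(seed)
    observed = mean(sample_a) - mean(sample_b)
    pooled = list(sample_a) + list(sample_b)
    k = len(sample_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled scores
        if mean(pooled[:k]) - mean(pooled[k:]) >= observed:
            hits += 1
    return hits / n_perm

fuzzy_perf = [118, 125, 131, 122, 135, 128, 120, 133]     # invented values
classic_perf = [100, 97, 108, 95, 104, 99, 101, 103]      # invented values
p = permutation_test(fuzzy_perf, classic_perf)
print(f"p = {p:.4f}")  # a small p: the gap is unlikely to arise by chance
```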
The three main results are described below: • Better performances were achieved by the subjects in the "fuzzy" sample. The performance gap reaches 30% and is statistically significant. • The average of the forecasts calculated by the two tools is identical (overall deviation < 5%). • The feeling of confidence in the face of the imperfection of information is statistically greater amongst the subjects in the "fuzzy" sample.
The results stand as follows: a significant difference in performance and confidence appeared, while the type of tool (fuzzy or classic) did not create any distortion in the elaboration of the projected curves.
The general hypothesis we wanted to test was as follows: the difference in the results comes from the representation of knowledge of the phenomenon. A reduction in uncertainty (and thereby in cognitive dissonance) allows the subject to tackle the management problem with greater confidence. Table 6 recapitulates the statistical results obtained in the main tests carried out:
• Test 1: The performances of the "fuzzy" sample are higher than those of the "classic" sample (not rejected); the performances of the "fuzzy" sample are more coherent than those of the "classic" sample (not rejected).
• Test 2: The two samples have an identical knowledge level (not rejected).
• Test 3: The forecasts of the "fuzzy" sample and the "classic" sample have identical averages (not rejected).
• Test 4: The forecast-reality divergences of the "fuzzy" sample are more reliable (not rejected); this trend grows over time (not rejected).
• Test 5: Divergences in the interpretation of the initial information are weaker for the "fuzzy" sample (not rejected); this trend increases over time (not rejected).
• Test 6: The forecasts are perceived as better by the "fuzzy" sample (not rejected).
• Test 7: The quality of the perceived forecasts and the forecast-reality divergence are correlated (not rejected for the "classic" sample).
• Test 8: Performances are perceived as better by the "fuzzy" sample (not rejected).
• Test 9: The perceived performances are not correlated with the real performances (not rejected).
source: (Lesage, 1999b)
We would now like to put forward an interpretation of these overall results. Based on the hypothesis of cognitive ergonomics ("representation modifies action") previously laid down, this interpretation remains subject to various limits, which we will then explain.

C. Cognitive interpretation of the results obtained
The performances obtained by the fuzzy sample are superior to those obtained by the classic sample. The only difference between the two samples lies in the representation of the projected curves. The interpretation we are putting forward therefore refers to developments linked to the taking into account of the cognitive dimension previously dealt with.
During the forecasting phase, the subject must process the imperfect information in order to inject it into a data processing system (the forecasting aid software program). The available information is imperfect on the following counts: • The information provided is imprecise ("around", "approximately", etc.); • The information provided is incomplete (we know, for example, the cost parameters at the time a machine is installed and once its cruising rhythm is reached, but not what happens between these two points in time); • The time available is insufficient to process the communicated information "rationally" (the initial knowledge test shows an average of 6/20, even though all the answers are on the technical information sheet that the subject permanently has at his disposal).
The beginning of the data processing process may be represented by Shannon's schema of information transmission. At the stage where information enters the information processing process, there are no differences between the fuzzy sample and the classic sample: the levels of knowledge measured by the initial knowledge test are indeed identical (test #2).
On the other hand, the processing of information by the forecasting program differs according to the sample. At this stage, the remarks we made on the impact of the differences in processing are verified by the experiment: • The modelled situation is on average the same (test #3): processing by fuzzy logic does not produce "better" modelling, that is to say a model closer to reality. • The divergences between subjects in the interpretation of the initial information are weaker for the fuzzy sample than for the classic sample (test #5).
Processing by fuzzy logic therefore introduces flexibility by using the differences in interpretation to evaluate the whole range of possibilities, without thereby modifying the average coherence of the representation. The information outputs (projected curves) the subject obtains at the end of processing are therefore of the following types: • a "trajectory", showing the classic sample subject one interpretation of the management situation; • a "map", showing the fuzzy sample subject a graduated picture of the possible interpretations of the management situation.
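The "trajectory" versus "map" contrast can be sketched as follows, assuming a simple growth model with invented figures (the function names are hypothetical, not taken from the experiment's software):

```python
# Sketch: a classic projection yields one value per period (a
# "trajectory"); a fuzzy projection yields nested intervals per period,
# graded by possibility level (a "map").

def trajectory(start, growth, periods):
    """Classic projection: a single value per period."""
    curve, value = [], start
    for _ in range(periods):
        value *= 1 + growth
        curve.append(round(value, 1))
    return curve

def possibility_map(start, growth_tri, periods, alphas=(0.0, 0.5, 1.0)):
    """Fuzzy projection: per period, an interval at each possibility
    level alpha; growth_tri = (low, most plausible, high) growth rate."""
    a, b, c = growth_tri
    curve = []
    for t in range(1, periods + 1):
        bands = {}
        for alpha in alphas:
            g_lo = a + alpha * (b - a)
            g_hi = c - alpha * (c - b)
            bands[alpha] = (round(start * (1 + g_lo) ** t, 1),
                            round(start * (1 + g_hi) ** t, 1))
        curve.append(bands)
    return curve

print(trajectory(100, 0.05, 3))                    # one line to follow
print(possibility_map(100, (0.0, 0.05, 0.12), 3))  # graded bands per period
```

At alpha = 1 the map's band collapses to the trajectory's single line; the wider, lower-alpha bands leave the subject room to interpret without leaving the collective representation.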
In the second phase, during which the subject deals with reality, the curves are used as a support to be referred to in the absence of any other objective point of reference (the curves are consulted an average of 12 times during the experiment, that is, once per period simulated). In the course of this real phase, the subject acquires other information thereby modifying his perception of the management situation. He therefore finds himself the receiver of the initial "projected curves" information and uses it while being aware of the information which has not been integrated into the projected model.
The subject is in a situation of "information stress": he does not have enough time to process all the available information rationally. In accordance with the preceding developments, a cognitive process of the "mental structure" type had to be triggered in order to simplify the problem set. The difference in representation means that the cognitive biases at work have a different impact: • it is greater for the classic sample: the projected curve constitutes a reference which may rapidly become obsolete; • it is smaller for the fuzzy sample: the projected curve leaves the subject room to be wrong while keeping him, more or less, within the prescribed position.
In this situation, the following interpretation may be put forward to explain the difference in the final performances: • the flexibility of the representation by fuzzy logic diminished the risks of major divergence between the "forecast" information and the "reality" information (test #4); • the subjects of the fuzzy sample therefore had less of a feeling of having made a mistake when making their forecasts (test #6); the impact of the social desirability effect is diminished, as is the phenomenon of cognitive dissonance; • the reduction of cognitive dissonance may be seen in the confidence expressed by the subjects: the feeling of satisfaction in relation to the level of performance (test #8) is greater within the fuzzy sample, and the confidence expressed in the classification (test #8) is likewise stronger for the fuzzy sample; • the feeling of confidence thus created allowed the subjects of the fuzzy sample to process information better under the pressure of very tight deadlines; their performances are therefore superior to those of the classic sample (test #1).
By using cognitive phenomena, this description constitutes a possible interpretation of the superior performances obtained by the fuzzy sample. Some elements must be carefully examined before the preceding results may be used.

D. Limits of the proposed interpretation
Two characteristics should put the proposed interpretation into perspective: the area in which the study is valid and an explanation of the cognitive process.
The first point deals with the validity of a study. It may be assessed in accordance with two families of criteria: internal validity and external validity.
• Internal validity is the assurance that the variations of the response variable (or dependent variable) are caused solely by the variations of the explanatory variable (in this case, membership of one of the two samples). Different effects may disturb the conditions under which the data is obtained. The experiment carried out is not sensitive to effects linked to the time-span (less than 90 minutes) or to the influence of outside events (isolation). Moreover, possible biases linked to the quality of the measuring instrument (inappropriate questionnaire, the experimenter's behaviour) were studied during a pre-test and reduced during the experiment itself. • External validity concerns the possibilities and limits of extrapolating the results and conclusions of the research to the whole area which was the subject of the experiment. Indeed, the results must be capable of being extended to the population under study. However, a certain number of elements run counter to this. On the one hand, the context of the study, an experimental situation, neglects a set (largely non-identifiable) of effects and events which intervene in a real situation. On the other hand, the sole explanatory variable is the use of a fuzzy or classic forecasting tool, which can only constitute part of the information available for the daily management of a company. Finally, the dependent variables (mainly the company's value) were selected from among a set of indicators used in real conditions.
In any case, the objective is not the extrapolation of the results to real conditions: we chose to carry out an experiment in order to identify and measure the effects, if these were likely to occur. The experimental conditions were therefore chosen to enable us to highlight possible effects, which proved to be the case. We thus knowingly favoured internal validity over external validity.
The second point involves the explanation of the cognitive process: the psychological mechanisms at work during the interpretation-action process were treated by the experiment as a sort of "black box". In-depth interviews based on experimental cognitive psychology would eventually have to be carried out to identify the cognitive mechanisms adopted between the reception of the stimulus (forecast-reality divergence) and the consecutive action (taking decisions about investment, production, and sales).
In the end, it seems that the cognitive ergonomics hypothesis ("representation modifies action") allows a coherent interpretation of the results obtained. It enables us to identify the origin of the differences in behaviour and in performance between the two samples.

CONCLUSION
We have dealt with two aspects of uncertainty in the field of accounting and controlling: • the creation of models which will accept imperfect information; • the effect of the use of imperfect information in decision making. The first point covers both imperfection in the data and imperfection in the relationships between the data. Indeed, we have seen that imperfect accounting data has led to a radical reappraisal and extension of the principle of double-entry accounting in order to obtain fuzzified financial statements. At the same time, auditing financial statements also raises the problem of the relevance of a precise evaluation of a judgement. We therefore put forward and tested an audit risk evaluation model based on fuzzy logic, which allowed for a linguistic evaluation of the judgement. However, imperfect data cannot be the only dimension of modelled uncertainty in accounting and controlling. Indeed, the relationships between variables are a major characteristic of management problems. We suggested, for instance, a formal model of financial valuation including synergy, using fuzzy measures. Moreover, the existence of a relationship between two variables of a model may be used to reduce the entropy of the resulting information, as we have shown by constructing an algorithm modelling a simple fuzzy relationship. Finally, the cognitive aspect of the use of imperfect information by managers should not be neglected in any consideration of decision-making. On the one hand, different forms of resistance to ambiguity have been found, which consequently prejudice the effectiveness of the information. But on the other hand, the replacement of the theory of measurement by fuzzy logic modifies the paradigmatic framework of the "management with imperfect information" model.
When this model becomes a "representation of knowledge" and is no longer a "function of data", it seems to induce behaviour which is less subject to cognitive bias, enabling a more rational treatment of the available information. This work, which originated in accounting and controlling, suggests research possibilities likely to find applications in other disciplines of management under uncertainty (cost management, finance, etc.).