Multilevel Governance of the Digital Space: Does a 'Second Rank' Institutional Framework Exist?

Digital technologies make it possible to decentrally establish institutional frameworks based on the self-implementation of exclusive rights of use over information and on the self-regulation of on-line communities. Through a decentralized system of IPRs and collective rule setting of this kind, agents would benefit from coordination frames well adapted to their specific needs and preferences. However, such a process can also result in inefficiencies. While becoming subject to exclusion, information and coordination spaces remain non-divisible goods. Moreover, individual and group interests could succeed in taking non-contestable control over "privatized" information spaces. To overcome these weaknesses and threats, an institution of last resort - placed above the agents and the self-regulated communities - should be created to make enforceable the constitutional principles that guarantee fundamental rights to content producers and users. Based on the principle of subsidiarity, it should supervise the behavior of individuals and communities to prevent the capture of public wealth by individual interests, to solve conflicts among claims and local regulations, and to guarantee enforcement when exclusive rights of use are legitimate. How to implement it is uncertain, however, since neither a central authority of last resort nor a global community exists to do so. A combination of open, centralized negotiations among public and private norm setters with a conflict-settlement mechanism aimed at harmonizing the proliferating orders could nevertheless lead to the progressive definition of such constitutional basic rights and principles.


Property Rights as a Way to Think About Regulation
In any economic space, a set of fundamental rules delineates the rights to use economic resources and allocates these rights to interacting agents. The activity of setting these "rules of the game" played by agents can be qualified as regulation. This broad definition of regulation is useful for at least two purposes. First, it creates a common framework for thinking about both so-called "self-regulation" and "State regulation", since it does not refer to the entity responsible for setting up the "rules of the game" for the "players". Indeed, it can either be some exogenous third party - such as the State - or the players themselves that (consciously or not) interact and set collective rules. Second, in the spirit of Ronald Coase (1960), this definition makes it clear that the management of externalities, public goods and other sources of "market failures" is an aspect of the greater activity of organizing the framework in which agents interact and exchange 1, and corresponds to the notion of a property rights (PRs) system as stated by Yoram Barzel (1989) and Douglass North (1990). By delineating and allocating rights of use to economic agents, a PRs system establishes the way they can individually or collectively make decisions about the uses of resources. In that general understanding, setting up a PRs system implies four major activities: setting rules; supervising their enforcement and punishing infringements; settling conflicts, since there are always ambiguities in rules and therefore different interpretations; and implementing decision mechanisms for when rules do not apply, since there is always some incompleteness in a system of rules.
A property rights approach to regulation is useful for analyzing the way the Internet is governed, because the cyber-world is increasingly considered a model for a new regulatory regime based on decentralized and State-free regulation, often qualified as self-regulation 2. At first sight, the Internet and Internet-based activities have been developing on the basis of governance mechanisms based on contractual agreements or communities' self-regulations. This is due both to the global connectivity provided by the Internet's (end-to-end) architecture and (open) standards - which make it easy to bypass traditional State norms - and to the coding and tracking capabilities provided by digital technologies - which make it possible to implement self-enforceable property rights and rules at (relatively) low cost (see below). This paper is an attempt to analyze the principles of an institutional framework that could be adapted to the regulation of the Internet and related activities. This will lead to an analysis of why some aspects of the coordination of activities should be centrally managed, and why hierarchical principles should be implemented to organize the relationships among regulatory bodies. We will first review the reasons why technology challenges the traditional institutional framework (1). We will then review the contributions of the economics of multi-level governance and of federalism to set out the principles that should inspire the institutional design behind the regulation of the digital world (2). This will lead us to analyze the specificities of the problems raised by the governance of the Internet, highlighting the principles that should inspire an ideal governance architecture (3). Actual governance mechanisms and the design of new ones seem, however, to drive the system away from this ideal type (4). We will then analyze the available mechanisms that would favor a more satisfying path of evolution (5).

What is new with the Internet?
In the following pages, we will deal with the regulation of both networks (infrastructures and services) 3 and contents 4. While Internet technologies enable the separation of the management of network services from the management of information services, the strong technical and economic interdependencies between the two call for a simultaneous analysis of their regulation. Today, the Internet is de facto co-regulated by national Governments - which intervene, however, without strongly coordinating among themselves -, by professional entities - whose competencies overlap and which are not always legitimate -, and by the bodies responsible for the technical standardization and management of the system, in particular ICANN (Internet Corporation for Assigned Names and Numbers; http://www.icann.com/), the IETF (Internet Engineering Task Force; http://www.ietf.org/) and the W3C (World Wide Web Consortium; http://www.w3.org/) - which are very dynamic, but lack strong institutional roots (see the introductory chapter and section 4 below; see also Eric Brousseau, 2001). These various entities contribute to designing rights of use over information flows or network components. 3 The Internet is not a network per se, but a network of networks that relies on common standards and a decentralized network administration. Two types of essential resources ensure the functioning of the network. A single addressing system enables any information-processing device (IPD) connected to the network to identify the other IPDs so as to route requests and replies among them. On the Internet, the addressing system is made of two layers. First, a numerical address is allocated to each of the IPDs connected to the network: the Internet Protocol Number. IP Numbers are addresses readable only by machines. Second, a "user-friendly" addressing system - the Domain Name System (DNS) - is implemented to allow Internet users to express their requests in a language that is closer to natural language.
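The two-layer addressing system described above can be sketched in a few lines. The domain names, IP numbers and the `resolve` function below are invented for illustration; they stand in for the real, globally distributed DNS.

```python
import ipaddress

# Toy illustration of the Internet's two-layer addressing system:
# the DNS maps human-readable names onto machine-readable IP numbers.
# All names and addresses below are invented for illustration.
DNS_TABLE = {
    "www.example.org": "93.184.216.34",
    "mail.example.org": "93.184.216.35",
}

def resolve(domain_name: str) -> ipaddress.IPv4Address:
    """Translate a user-friendly domain name into an IP number."""
    try:
        return ipaddress.IPv4Address(DNS_TABLE[domain_name])
    except KeyError:
        raise LookupError(f"unknown domain: {domain_name}") from None

ip = resolve("www.example.org")
print(ip)  # 93.184.216.34
```

The point of the sketch is simply that the user-friendly layer is a mapping onto the numerical layer: users manipulate names, while routing operates on IP numbers.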
IPDs must be able to use standardized languages to manage both communications among each other and cooperative information-handling processes. The Internet is based on the use of two types of standards. The Internet Protocol (IP) is the common communication protocol that makes it possible to manage data flows among IPDs. HyperText Markup Language (HTML) is the multimedia language that enables any IPD to transform any kind of information (data, sound, image, etc.) into codes that can be "understood" by any other IPD. It is a common markup language that allows heterogeneous devices to inter-operate when processing information. 4 According to the Working Group on Internet Governance, working in the frame of the World Summit on the Information Society (see below, section 4), "Internet governance is the development and application by Governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet". It therefore encompasses the management of infrastructure and of critical Internet resources (addressing system, communication protocol, standards, etc.), issues related to the management of priority, reliability and security over the Internet (including spam, network security, cybercrime, consumer protection, privacy, etc.), and issues that are relevant to the Internet but whose impact is much wider than the Internet, including competition policy, regulation of e-commerce and e-business, intellectual property rights (IPRs), and also freedom of expression, freedom of the press, multilingualism and multiculturalism; these latter dimensions being highlighted in particular by many of the stakeholders involved in the WSIS process.
It turns out, nevertheless, to be quite impossible to disentangle the management of content and container, since the regulation of the latter determines how agents can be empowered (or not) to manage contents.
The fuzzy set of entities that regulates the Internet draws from a recent but rich history, during which computer scientists working within US governmental agencies progressively established the technical principles that govern the management of a decentralized network. This effort was then transferred to private initiatives when the Internet became commercial and open to private investments (Barry Leiner et alii, 2000). Both earlier and later, regulatory initiatives bypassed the traditional intergovernmental mechanisms of international standardization and regulation (Eric Brousseau, 2001). Four main reasons explain this. First, the velocity of innovation in both digital networks and multimedia technologies was quite incompatible with the slowness of these international or intergovernmental agencies. Second, until 1998, the Internet was essentially a US network, and it is still dominated by US players today. Third, the liberal ideology of, respectively, the inventors and the entrepreneurs of the Internet explains their mistrust of international or intergovernmental bureaucracies. Fourth, technology made it possible to implement regulatory principles on a decentralized basis. The present institutional framework is problematic for at least two reasons. First, it is partly inefficient, in the sense that there is incompleteness, conflict, and failure of enforcement in the set of implemented rules. Second, the current processes used to establish these rules do not guarantee that the interests of all the stakeholders are fairly taken into account. It is therefore necessary to investigate what the basic principles of the regulation of the cyber-world should be. Indeed, many specialists advocate that, beyond a common minimal technical regulation - the publication of open standards and a transparent management of the addressing system - the Internet and its uses should be decentrally self-regulated.
The combination of an abundance of essential resources, strong competition among information and network service providers, and the ability to decentrally configure the services supported by the network according to the preferences of the users is supposed to allow adaptation to each and everybody's preferences, without fear of conflicting uses and capture (as summed up by Brett Frischmann, 2000, and Niva Elkin-Koren and Eli Salzberger, 2000).
Before discussing this view, let us remind the reader why the Internet and digital technologies profoundly change the economics of regulation and, more generally, the economics of institutional frameworks. Following a New Institutional approach, it is indeed useful to consider institutional frameworks as "social technologies" that impact economic efficiency, both because they have to be considered as a means of production and because they are costly to produce. A cost/benefit analysis therefore has to be applied to institutional frameworks.
According to Yoram Barzel (1989) and Douglass North (1990), any institutional framework can be analyzed as a Property Rights (PRs) system, the latter notion referring to a set of rules and mechanisms that delineates rights of use over economic resources and allocates them to decision makers so as to enable them to take economic actions 5. A PRs system is based, first, on a delineation (measure) of these rights of use - consisting in establishing the boundaries between different ways of using resources and among regimes for appropriating the output of these uses - and on a process of allocation of these rights, which are granted to individuals or groups; together these generate measurement costs. Second, enforcement mechanisms implement these rights of use by excluding every unentitled agent from access to the protected resources, or from capturing the output of their use. This involves controlling access, supervising uses, granting authorization for uses and punishing unauthorized uses (either to obtain damages or to dissuade potential infringers), and generates enforcement costs.
Digital technologies and the Internet architecture have an essential impact on both measurement and enforcement operations. Indeed, due to the decreasing cost of information processing, to the increasing capabilities of Information and Communication Technologies (ICTs), and to the decentralized management of the Internet, individual agents have access to capabilities that allow them to individually implement property rights (A) and to set up self-enforcing collective rules (B) at a much lower cost than before. Moreover, the efficiency and the credibility of traditional regulatory frames are challenged by digital technologies (C).

A) Coding and Tracking as a way to decentrally design and enforce property rights
Digital technologies make it possible to implement a self-enforcing system of property rights over information goods and services (Lawrence Lessig, 1999). Indeed, any set of information that is codifiable in a computer can be either encrypted to control its uses ex-ante (code of access) or easily, instantaneously and cheaply tracked to control ex-post how it has been used. Moreover, digital technologies make it possible to implement, at very low cost, customized conditions of exchange over contents, since contracts governing information exchanges are made self-enforceable through digital codes. Consequently, agents can tailor the conditions of exchange of intangibles to the specificities of the exchange and of the parties. This results in a more "decentralized" setting of property rights than could be deduced from the frameworks of Yoram Barzel (1989) and Douglass North (1990).
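The two enforcement modes just described - ex-ante control through a code of access, and ex-post control through tracking - can be illustrated with a toy sketch. All keys, user names and content identifiers below are invented; a real system would rely on proper encryption and secure logging rather than this minimal token scheme.

```python
import hashlib
import hmac

# Hypothetical rights holder's secret key (invented for illustration).
SECRET_KEY = b"rights-holder-key"

def grant_token(user: str) -> str:
    """Ex-ante control: issue an access token only to entitled users."""
    return hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()

def check_access(user: str, token: str) -> bool:
    """Verify a presented token before releasing the content."""
    return hmac.compare_digest(grant_token(user), token)

# Ex-post control: a record of who used which content, for later audit.
usage_log: list[str] = []

def track_use(user: str, content_id: str) -> None:
    usage_log.append(f"{user}:{content_id}")

token = grant_token("alice")
assert check_access("alice", token)    # the entitled user gets in
assert not check_access("bob", token)  # an unentitled user is excluded
track_use("alice", "article-42")
```

The design choice mirrors the text: the rights holder no longer needs a public authority to exclude unentitled agents, since exclusion (the token check) and supervision (the usage log) are implemented directly in the code.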
As analyzed in Eric Brousseau (2001), digital technologies provide content creators and network operators with means to de facto self-claim and implement property rights by enabling them to control access to their resources. They also make it possible to finely tune, at reasonable cost, the transfer of these rights of access and use among economic agents. Decentralization creates a more efficient property rights system, since centrally settled rules never perfectly fit local individual needs and constraints, resulting in maladaptation costs borne by the users of the rule (see section 2, and the analyses of Yoram Barzel, 1989; Douglass North, 1990; Eric Brousseau and Emmanuel Raynaud, 2005).

B) Information spaces as a way of implementing self regulations
Not only can individual property rights be individually measured and enforced, but collective regulations can also be more decentrally designed and enforced. Digital networks relying on the end-to-end principle allow the creation of "information spaces" and the control of access to them. These information spaces can be of very different natures: mailing lists, secured websites, forums, intra- and extranets, etc. In each case, some individual or entity is in charge of managing the list of subscribers who can access the common space. This entity therefore controls inclusion in, and exclusion from, the resulting on-line community aimed at sharing resources or at interacting within the closed space. Control of access implements a collective regulation, since those who do not comply are denied access to the information space. It is essential to point out that the collective regulation can of course concern the use of the network or access to contents. However, it can also organize more broadly the interactions among the members of the on-line community. If threatened with exclusion from the information space, individuals are likely to comply with regulations set on-line, even if their interactions are partly performed off-line (as in the case of a transaction over a tangible good).
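A minimal sketch of such an information space, with invented names and membership management reduced to a simple allow-list, could look like this:

```python
# Toy model of an "information space" whose manager controls inclusion
# in, and exclusion from, an on-line community. All names are invented.

class InformationSpace:
    def __init__(self, manager: str):
        self.manager = manager
        self.members: set[str] = set()

    def admit(self, user: str) -> None:
        """Inclusion: the manager adds a subscriber to the space."""
        self.members.add(user)

    def exclude(self, user: str) -> None:
        """The ultimate retaliation: exclusion from the community."""
        self.members.discard(user)

    def access(self, user: str, resource: str) -> str:
        """Only current members can reach the shared resources."""
        if user not in self.members:
            raise PermissionError(f"{user} is excluded from this space")
        return f"{resource} delivered to {user}"

forum = InformationSpace(manager="moderator")
forum.admit("alice")
forum.access("alice", "thread-1")  # allowed while alice complies
forum.exclude("alice")             # non-compliance is punished
# forum.access("alice", "thread-1") would now raise PermissionError
```

The sketch shows why enforcement is cheap here: the collective rule is not adjudicated, it is simply executed by the access check, and the threat of `exclude` disciplines behavior even for interactions performed off-line.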
Thanks to this ability, information technologies considerably reduce the costs of implementing self-regulations. Well known to historians and anthropologists (e.g. Lisa Bernstein, 1992, 1996; Robert Cooter, 1994; Mark Granovetter, 1985), self-designed and self-enforced regulations allow communities to implement a collective order, either to organize collective action or to solve coordination difficulties. However, the efficiency of self-regulation decreases when communities become larger and more diverse. Indeed, each infringer is less visible because information circulates less efficiently. Moreover, each member of the community has fewer incentives to carry out retaliation, since retaliation is costly and since a member's individual action is less visible (Paul Milgrom et alii, 1990). Digital technologies facilitate the implementation of collective rules because they make it possible to track individual behavior more easily, because rules can be implemented in the code, and because the ultimate retaliation - exclusion from the community - is technically easy and cheap to manage.

C) How ICTs Challenge Traditional Institutional frameworks
While digital technologies favor the implementation of self-regulation, digital networks make traditional institutional frameworks less effective. The Internet is a-territorial by nature, while traditional public regulations are implemented on a territorial basis. The Internet's interconnectivity is the basis of its ability to support the sharing of communication functions and contents. Its decentralization guarantees its reliability, its efficiency and its ability to develop. The Internet is therefore the medium of a worldwide connectivity that overcomes existing regulations based on territorial jurisdiction and Government legitimacy. Two evolutions come into play. First, digital networks allow the bypassing of Nation-State-based regulatory frameworks. Second, the legitimacy (and the efficiency) of these previous frameworks is called into question due to the properties of the new technological infrastructure. These evolutions concern both the rights to access and use information and networks, and the norms used by agents to coordinate many dimensions of social interactions, in particular those that organize the use and exchange of economic resources.
First, any legislation can be bypassed through the Internet, because no governmental agency is able to efficiently supervise the exchanges of information among the Internet users under its jurisdiction, and between them and foreign third parties, so as to guarantee the enforcement of existing laws. Moreover, these exchanges can be disguised, and potential infringers of the law can use a wide set of technical means to perform operations that would otherwise be blocked by technical means operated by the State 6. The generalized interconnectivity, as well as the possibility to break codes and, reciprocally, to strongly encrypt, limits the ability of Governments to control network-based activities 7. In practice, this ability to bypass traditional public regulations does not result in a massive development of illegal activities in all domains, even if cyber-crime is developing and if some practices, like file sharing, raise major concerns in some domains, such as the cultural industries. However, it grants many stakeholders a bargaining power that enables them to call for the reshaping of a lot of legislation, either because they consider the existing legislation no longer tailored to the problems it is supposed to solve, or because existing national orders create a handicap in the global competition taking place in the digital space. This results in a direct confrontation and an unchecked competition among norms in many domains, which is reinforced by the ability of service providers and content providers to locate their information-processing devices in territories where the norms most in accordance with their preferences apply 8.
Second, the legitimacy and the efficiency of traditional regulations are also in question. Some features of traditional institutional frameworks are no longer justified, because the new technological infrastructure renews the economics of information and many of the optimal tradeoffs behind the current regulatory framework. In some cases it is the principle of State regulation that is challenged per se; in others, it is the way public intervention was designed that is no longer relevant, because the problems it addressed have changed with the new technologies. Let us give two sets of examples.
First, the traditional governmental intervention in the design of intellectual property rights (IPRs) systems was largely justified because physical constraint was, in the last resort, the only way to really prevent unentitled agents from accessing and using information. Producers of knowledge and information had to rely on public authorities acting as their agents to guarantee the effectiveness of their exclusive rights of use. The resulting design of a unified regime of IPRs, whatever the technical domain and the industry, had, however, strong maladaptation costs (Eric Brousseau and Emmanuel Raynaud, 2005; Eric Brousseau and Christian Bessy, 2005). Indeed, to be implemented by a central government, a rule - for instance, on the duration and scope of protection - has to be generic so as to apply to a wide set of situations, resulting in a lack of effectiveness or efficiency in many specific situations (the optimal delay of disclosure being, for instance, different in rapidly evolving and in stable technological domains). Thanks to the ability to implement self-enforced IPRs, digital technologies allow content creators and innovators to decide on the optimal rules of diffusion/protection to be applied, given the specificity of what they produce, the technological and competitive context, and the business model they want to implement.
In addition, IPRs were designed at the national level because for a long time the nation was the relevant economic space, and because an authority of last resort was available at this level only. Today the economic space is increasingly global, and the same is true for the information space. In many respects it is meaningless to guarantee exclusive rights of use over information in some territories and not in others. Since digital technologies make it possible to enforce exclusive rights of use wherever the information is used, the usefulness of national IPRs systems decreases.
Second, the new technological context affects the logic of the design of former rules. For instance, many regulations concerning the diffusion of information (e.g. restrictions on or banning of certain contents, organization of specific distribution channels, etc.) were justified because traditional broadcasting technologies - in particular Hertzian analog radio and TV - made it impossible to control access to information. Because they allow access to contents to be controlled on a customized basis (i.e. user by user), digital technologies turn many traditional regulations restricting publication rights and channeling contents into obsolete tools.
In the same spirit, digital networks overcome the traditional boundaries between, and the implementability of, communication (one-to-one) versus broadcasting (one-to-many) regulations. Most information services are hybrids between these two extremes, even inventing new logics for sharing and exchanging information.
The former national regulations are no longer relevant because agents now have the possibility of decentrally implementing more efficient orders, established within communities characterized by common practices or preferences rather than by location or jurisdiction. Is a new institutional framework needed, then? Some authors argue that information and digital resources are almost pure indivisible and non-rival goods, which renders the implementation of common rules useless, since these rules will be spontaneously and decentrally produced by agents and communities implementing self-enforcing property rights and rules. If a resource does not cause conflicting claims among its potential users, scarcity does not arise and there is no economic problem. There are, however, generic (2) and specific (3) arguments to justify some central coordination of the rule making and enforcement process in the digital world.

The Economics of Multi-Level Governance
To better understand how rule setting can be optimally organized, it is useful to rely on the economics of constitutional design and multilevel governance, which seeks to clarify at what "level" of coordination different types of coordination problems should be solved, so as to understand the optimal way of organizing an institutional framework 9. 9 As pointed out by Liesbet Hooghe and Gary Marks [2001], the notion of multi-level governance - together with others such as multi-tiered governance, polycentric governance, multi-perspectival governance, FOCJ (functional, overlapping, competing jurisdictions), etc. - seeks to describe how governance has been changing in western societies. All refer to the dispersion of authority away from central government: upwards to the supranational level, downwards to subnational jurisdictions, and sideways to public/private networks. Two bodies of economics literature have investigated these notions in particular. Neoclassical political economists and public choice theorists (e.g. Elinor Ostrom and James Walker, 1997; James Rosenau, 2001) insist on the idea that governance results from the setting of dispersed self-rule on the part of diverse voluntary groups that overlap and interact in complex ways with each other and with imperfect markets and imperfect public-interest-seeking institutions. Theorists of (fiscal) federalism have gone from studies focused primarily on formal constitutional federations to a cost/benefit analysis of centralization vs. decentralization of authority (e.g. Wallace Oates, 1972). The New-Institutional Approach to Property Rights provided by Yoram Barzel (1989) and Douglass North (1990) is a useful framework for dealing with this issue. In a given group - let us say a Nation - the measurement and the enforcement of property rights can be performed either centrally, by an authority of last resort - generally the State, which benefits from the monopoly of legitimate violence - or decentrally, by the agents.
In the former case, the Government defines, for each set of economic resources, the rights that can be associated with them (e.g. usus, fructus and abusus) and maintains a cadastre in which each of these rights over any resource is recognized for individuals or groups. The government then sets up and operates an enforcement mechanism to exclude any unentitled agent from the protected use of these resources. It can be an ex-ante mechanism - e.g. a guard or an encryption mechanism that forbids access - or an ex-post mechanism that assesses violations and punishes infringers.
The alternative is to have the property rights self-delineated and self-enforced by agents. In this case, individuals (or groups) claim exclusiveness of usage, and they use available means (and, in the last resort, violence) to have their claims respected by third parties.
The NIE approach to property rights analyzes the advantages and disadvantages of centralization and decentralization (Yoram Barzel, 1989; Douglass North, 1990; Eric Brousseau and Mhand Fares, 2000; Eric Brousseau and Emmanuel Raynaud, 2005; see also Christian Bessy, 2005, in the specific case of IPRs). It leads to the idea that establishing a PRs system either fully centrally or fully decentrally would be inefficient, in the sense that the costs of setting up a complete PRs system 10 would be too high compared to the benefit agents would get from being able to use resources, invest in the genesis of production capabilities, and organize trade. Any property rights system therefore results from a trade-off between the advantages of centralization and decentralization.
In a recent paper, Eric Brousseau and Emmanuel Raynaud (2005) point out the main "factors" playing a role in this tradeoff, enabling identification of the optimal level/mode of coordination to solve a given coordination problem in a specific context. On the one hand, centralization provides agents with (i) scale and scope effects, (ii) learning and specialization benefits, and (iii) means to reduce losses of collective welfare (by allowing an increased consistency among local rules, an internalization of externalities, and the genesis of positive network effects due to the use of common standards of interaction). Indeed, the entity in charge of centrally designing a rule for a community will take into account the interdependencies among agents and the net benefit of alternative rules at the collective level, while decentralized negotiations could fail to consider these elements if transaction costs are not zero, if information asymmetries arise, if property rights are incomplete, etc. On the other hand, centralization generates inefficiencies due to: (a) static mal-adaptation 11 (linked to the increasing heterogeneity of individual preferences); (b) dynamic mal-adaptation (due to the reduced renegotiability of collective rules and therefore the difficulty of adapting compromises to new circumstances); (c) increasing enforcement requirements (since there are increasing incentives to free-ride, due to the rising maladaptation costs inherent to a larger, less homogeneous community); and (d) the rising private capture of coordination. Indeed, when an order is both general and centrally designed, the interests that can influence the design of the order have strong incentives to distort collective governance in their favor. First, it is profitable, since the order applies to a wide set of agents. Second, it is rational, because it is difficult for agents to escape from this rent extraction (since the order is general, agents have few exit options).
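The tradeoff can be made concrete with a stylized numerical illustration. The functional forms and parameters below are entirely invented; the point is only that coordination costs fall with centralization while maladaptation costs rise with it, faster when preferences are heterogeneous, so the cost-minimizing degree of centralization falls as heterogeneity grows.

```python
# Stylized illustration (all functional forms and parameters invented)
# of the centralization/decentralization tradeoff discussed above.

def coordination_cost(c: float) -> float:
    """Falls as the degree of centralization c in [0, 1] rises
    (scale and scope effects, consistency among rules)."""
    return 10.0 * (1.0 - c)

def maladaptation_cost(c: float, heterogeneity: float) -> float:
    """Rises with centralization, faster when preferences are diverse."""
    return heterogeneity * c ** 2

def optimal_centralization(heterogeneity: float, steps: int = 1000) -> float:
    """Grid search for the cost-minimizing degree of centralization."""
    grid = [i / steps for i in range(steps + 1)]
    return min(grid,
               key=lambda c: coordination_cost(c)
                             + maladaptation_cost(c, heterogeneity))

# A more heterogeneous community warrants less centralization.
assert optimal_centralization(heterogeneity=4.0) >= \
       optimal_centralization(heterogeneity=40.0)
```

Under these invented forms, a fairly homogeneous community (low heterogeneity) is best served by full centralization, while a very diverse one minimizes total costs at a low degree of centralization; this is the comparative-statics point the paragraph makes verbally.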
This tradeoff, which is inherent to the solving of any coordination problem, makes it clear that there is no "optimal" rule-making solution/level that cancels the cost of coordinating the use of economic resources. Agents therefore play on complementarities between levels/modes of coordination to try to minimize the coordination costs they bear (and maximize their efficiency in using and exchanging economic resources).
First, agents have to organize a "division of labor" among the various tools they can rely on to coordinate: general institutions, community institutions and contracts. To do so, they divide actual coordination problems into several smaller ones, each to be solved by the level/mode of coordination that minimizes their private costs. Here a principle of "subsidiarity" applies. The relevant level of coordination has to fit the "generality" of the addressed issue, which can concern a pair of agents, a subset of the population, or the whole population. Agents build complementary contractual arrangements, self-regulations and general institutions to solve the various dimensions of their coordination problems, in relation to the optimal centralization/decentralization tradeoff for each of these dimensions.
Second, each level/mode of governance potentially generates inefficiencies.
Complementarities among them should also be considered in terms of interactions. A "check and balance" principle has to be managed between the various levels of governance, and in particular between general and mandatory institutions and local (or specialized) and voluntary ones 12. The former (qualified as public institutions in our framework) should (i) control for monopoly capture by the latter (qualified as private institutions), and (ii) be used to reinforce the bounded enforcing capabilities of private institutions, since the latter are sometimes unable to implement and enforce an efficient private order (which, usually, cannot rely on violence to incite agents to comply). On the other hand, private institutions and governance mechanisms are useful in (a) allowing innovation in the institutional framework, and (b) limiting the discretionary power of those in control of public institutions. Their ability to bypass and even overcome the public order constrains public institutions not to be overly inefficient.
These elements call for a multilevel governance of any coordination problem, fitted to its particular needs and to individual preferences. 12 In our framework, public institutions are the result of an evolutionary process of extension of scope and rigidification of contractual arrangements, then of collective self-regulations, which start by being applied to small groups and specific coordination issues and expand to wider groups and to the solving of more generic coordination problems. By reference to the notion of Institutional Framework defined by Yoram Barzel [1989] and Douglass North [1990], public institutions are mandatory (agents do not choose whether to enforce the related rules) and general (the associated rules apply to wide and heterogeneous sets of situations and agents). Private institutions are voluntary and specialized.
These principles apply in the digital world as well. One of the characteristics of any multilevel governance is that a last resort regulating entity should overhang all the norm setting entities, just as a Supreme Court does in a constitutional State. The role of such a last resort entity is to enhance the efficiency of the global order resulting from the decentralized production of norms. It is in charge of avoiding incompatibilities among norms and maximizing positive network externalities among them, as well as of avoiding the capture of norms by individuals or groups seeking to exercise dominance. It is also responsible for guaranteeing the enforcement of locally set orders as long as they contribute to collective efficiency 13. At the national level, the (federal) State, and in particular the Supreme Court, are these last resort regulators placed above local government orders and private order setters. At the international level, due to the increasing globalization of the economy, but also of many other aspects of collective problems (in particular environmental and security ones), there is a need for the emergence of such last resort regulating entities, responsible for harmonizing and controlling the interacting orders resulting from the decentralized initiatives of national public systems (i.e. national States) and of private entities that set international orders in many domains: business, technology, culture and politics.

Institutional Stakes in the Digital Sphere
While strong arguments call for a decentralized settlement of IPRs and collective rules in the digital world (section 1), we have just reviewed the general arguments (section 2) mitigating this initial view: a certain degree of centralization is useful. This general statement is reinforced in the digital world because digital networks and digital activities affect the provision of two categories of essential resources. First, the IPRs regime and the collective rules organizing access to and use of information goods have a direct influence on the efficiency with which information and knowledge can be used, shared and produced. Second, digital networks are per se resources allowing coordination among agents. There are two levels of understanding of this assertion. Digital networks are communication tools allowing agents to contact each other, exchange information, etc. In addition, end-to-end networks provide agents with the capability of creating private orders to manage access to and use of information and coordination resources. Thus, the regulation of digital networks affects both the ability to collectively produce and use information and knowledge, and the capability of producing/using relevant and efficient inter-individual coordination platforms 14. It is essential to assess whether a fully decentralized production of the institutional frameworks regulating access to and use of these essential resources would be efficient.
We will start by pointing out that the efficiency of measurement (A) and enforcement (B) operations could be enhanced by some centralization, whereas full decentralization would lead to inefficiencies. Then we will show that these arguments are reinforced by the sustainability of monopolies in the digital world (C). 13 There is therefore a convergence of interests between central and local norm setters. The central norm setter can facilitate the role of local norm setters by strengthening their ability to implement their norms. The local norm setters can, in turn, accept the constraints imposed by the last resort regulator in exchange for this support. Such a potential bargain allows both levels of regulation to reinforce each other, since the local regulators recognize the legitimacy of the (collective) constraints imposed by the last resort regulator, while the latter recognizes the contribution of the local norm setters to the implementation of an efficient general order.
14 We give a very broad understanding to the notion of coordination platform. It stands for a coordination solution provided to a community by a system of common rules. It is therefore not restricted to a technical digital platform.

A. Can measurement operations be totally decentralized?
The way in which property rights and collective rules are set on digital networks affects the efficiency of the use of both rival and non-rival resources. Eric Brousseau details why, in both cases, full decentralization generates inefficiencies 15. We will concentrate here and in the following section on the most strategic issue: non-rival resources.
Concerning non-rival resources, the limit of the decentralized setting of property rights is an excessive protection of these resources, forbidding access to many potential users even though the resource is non-rival. On the one hand, it is today possible to self-claim exclusive rights of use over digitized information and knowledge (e.g. embedded in software) simply by encrypting it. It is also possible to control access to information spaces providing coordination resources. On the other hand, both types of resources are non-rival (provided, in the latter case, that there is no congestion effect within the information space), and there are important collective benefits to be gained from organizing free access to these resources.
In the case of information and knowledge, there are spillovers. Maximizing diffusion makes it possible to maximize the production of new information and new knowledge, because of the possible recombination of the existing stock, and because the ability to use a specific set of information or knowledge often depends upon access to complementary information. A central system guaranteeing some fundamental rights of access to information should be able to guarantee the exploitation of these externalities.
In the case of coordination platforms, the ability to create customized and efficient coordination means relies on access to an open platform on which the tools to create those means are available. Moreover, the openness of coordination platforms maximizes, for each member, the number of potential counterparts in exchange (whether the exchange is of a market type or not). It thus reinforces the chances of any transactor getting in touch with the right counterpart. It also reduces the ability of those participating in a platform to capture monopoly rents from other members, gives them incentives to enhance efficiency, and reinforces the division of labor. Again, a central system guaranteeing the openness of (generic and more specific 16) coordination platforms is essential to maximize the likelihood of exploiting these externalities. 15 As far as measurement operations are concerned, the main problem with rival resources is the way the initial endowment of property rights is performed. It can generate non-neutral wealth distribution effects. It can also lead to illegitimate capture when the rights granted on digital networks impact rights that were granted outside of them. As far as enforcement is concerned, full decentralization means losing the potential efficiency gains linked to centralization, in particular the ability to manage externalities and to benefit from economies of scale and scope (see Eric Brousseau and Emmanuel Raynaud (2005), or the previous section in this chapter, which recalls the general trade-off between the two ways of setting orders). 16 There are indeed two issues at stake. First, the openness of the network infrastructure and services in which these platforms are grounded is essential.
Put another way, the openness of the Internet infrastructure, architecture and essential facilities is at stake. It has to be pointed out that in the former institutional frameworks, organized at the national level, the Government often acted to limit the private capture of resources considered as essential facilities. Think for instance of the IPRs system, with the obligation of disclosure and with the process of transferring private property to the public domain with the passing of time. Think also of the competition policies aimed at guaranteeing access to exchange platforms.
These arguments call for some centrality, both to select the collectively optimal system, and to compensate those who are harmed by a system in which PRs are bounded as compared to what they would get if a system of unlimited rights to control access and uses prevailed.

B. The limits to a fully decentralized enforcement
Assuming that the above-mentioned measurement problems have been solved, an overly decentralized enforcement of PRs and self-regulations could lead to an inefficient protection of non-rival resources and, generally speaking, to an overly weak enforcement of private orders, depriving agents respectively of an efficient protection of their exclusive rights of use (or of remuneration) and of the benefits of relying on decentrally set-up coordination means.
Self-enforcement in the digital world relies on two pillars: code and the control of access to information spaces. However, neither of these methods guarantees perfect enforcement. First, no cryptographic system is inviolable; code-based protection is therefore imperfect. Second, the enforceability of the norms founding virtual communities is also in question. The power of the coordinators of virtual communities is obviously bounded by the ability of Internet users to access alternative communities providing them with the same type of service (or an alternative one they prefer), and by the limited ability of the providers of means of access to identify users (because the only identity that is certain over the Internet is that of computers). This calls for an entity able to guarantee enforcement in the last resort by being able to punish infringers. In this case, the centralization of enforcement is justified for two reasons 17. There is obviously a need for some exercise of constraint in the last resort. Indeed, even if a self- or local regulatory mechanism can rely on its ability to exclude infringers, its credibility is bounded by the cost borne by the infringers in case of exclusion. If this cost is inferior to the benefit an agent can earn by violating the local regulation, the self-enforcement mechanism will be unable to prevent (major) infringements. A last resort enforcement device able to increase the costs of violating the local regulation by implementing additional retaliation would reinforce the enforceability of local regulations (see Paul Milgrom et al., 1990; Mark Lemley, 1999; Eric Brousseau and Mhand Fares, 2000; Eric Brousseau and Emmanuel Raynaud, 2005). In addition, a central mechanism granted powers of constraint could guarantee a minimum transparency to control how closed information spaces are actually used. Indeed, a virtual community can be organized to infringe the rights or break the rules established by another on-line community.
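The deterrence argument above reduces to a simple inequality, which the following sketch makes explicit (the numbers are purely illustrative assumptions): exclusion from a community deters an infringer only if the value of the lost membership exceeds the gain from infringing, and last resort retaliation restores deterrence when it does not.

```python
# Stylized deterrence condition for a self-regulated community
# (illustrative numbers, not empirical estimates).

def deters(gain, exclusion_cost, retaliation=0.0):
    """True if the expected punishment outweighs the gain from infringement."""
    return exclusion_cost + retaliation > gain

gain = 50.0            # benefit an agent earns by violating the local rule
exclusion_cost = 30.0  # value of the membership lost if excluded

print(deters(gain, exclusion_cost))                    # False: exclusion alone fails
print(deters(gain, exclusion_cost, retaliation=40.0))  # True: last resort backing works
```

The point of the sketch is that the community's own sanction is bounded by the exit options of its members, while a last resort enforcer can add a penalty that does not depend on membership value.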
More generally, if agents agree on principles that allow them, at a collective level, to maximize efficiency (e.g. by optimally dealing with the protection/diffusion dilemma; see Stanley Besen and Leo Raskind, 1991), some of them could use ITs and digital networks to create sub-spaces where these common rules would be violated. These sub-spaces could be of very different types: P2P communities to bypass the right of remuneration of creators of content, market places to bypass regulations of market exchanges, private networks built on the digital infrastructure to manage communication flows in a different way (e.g. to manage priority for voice or video streams). Bypassing collective rules can ruin their sustainability. This is true whatever the type of rules ensuring the provision of a public good to a community, and whatever this public good is: an open access platform, open access information or knowledge. For instance, a collective agreement on remuneration rights to be allocated to creators or inventors could be hindered by the development of communities concealing their sharing of contents 18. In other cases, the sharing of information within OSS communities could be discouraged by the private capture of knowledge, and of the associated wealth and reputation, by some free riders. Some possibility of having collective rules enforced in the last resort is therefore needed to avoid phenomena comparable to the tragedy of the commons, the commons being in that case the collective rules that "optimally" solve the protection/open access dilemma in the provision of public goods.

C. Sustainable monopolies in the digital era
The problems raised by a full decentralization of the settlement of property rights and regulations are reinforced by the long-term sustainability of monopolies in the digital world. First, fixed costs make monopolies stable and viable in the digital world (Carl Shapiro and Hal Varian, 1999; Thomas Noe and Geoffrey Parker, 2000). Second, network externalities are also strong drivers of monopoly dominance, through the possible privatization of (interface and interoperation) standards. Network effects push for interoperability among the different components of the digital sphere. Players are incited to take control of interface and interoperability standards because these are essential means of controlling access to various markets. They therefore play on increasing returns of adoption and viral contagion to reinforce their ability to control the long-term evolution of standards, and therefore of market structures 19. The combination of the two allows some players or coalitions to gain control over the ability to settle the basis of property rights systems and collective coordination means, so as to have their interests prevail without really allowing alternative interests to compete on a fair basis, thus harming collective welfare.
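The lock-in dynamics of increasing returns of adoption can be illustrated by a toy simulation in the spirit of the standard urn models of standard competition (the adoption rule and all parameter values are our own illustrative assumptions): when each new adopter tends to join the standard that already has more users, an early lead compounds into durable dominance.

```python
import random

# Toy simulation of increasing returns of adoption (illustrative sketch):
# two competing standards "A" and "B"; each new adopter follows the
# current leader with high probability, otherwise picks at random.

def simulate(adopters=10_000, follow_prob=0.9, seed=1):
    random.seed(seed)
    counts = {"A": 1, "B": 1}
    for _ in range(adopters):
        leader = max(counts, key=counts.get)
        choice = leader if random.random() < follow_prob else random.choice("AB")
        counts[choice] += 1
    return counts

counts = simulate()
print(counts)  # one standard ends up with the overwhelming share
```

Whichever standard gains an early edge keeps it: the simulation ends with one standard holding well over 80% of adopters, which is the mechanism behind the durable monopolies discussed in the text.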
We explained above that a PRs system based on self-claims and self-enforcement could lead to an excessive private capture of non-rival resources such as information and knowledge or coordination platforms. Producers of these public goods will indeed be incited to fully and endlessly restrict access, while collective efficiency calls for limited exclusion in the short run and full openness in the long run, to allow creators/producers to be remunerated while maximizing diffusion and access. The impact of digital technologies on market structure makes this over-restriction of access to information, knowledge and coordination means sustainable in the long run, leading to a sub-optimal availability of these public goods in both the short and the long run. First, never-ending restricted access to public goods would have strong redistribution effects, since some players would be able to endlessly capture rents. Second, it can have strong negative collective effects, such as hindering innovation (because of restricted entry and reduced spill-over effects) and fragmenting the open platform of coordination, making valuable deals impossible.
These threats make it necessary to bind the ability of economic agents to restrict access to contents and coordination platforms. This implies limiting the encryption capacities of agents (e.g. through the mandatory registration of code keys with trustworthy third parties) to maintain a minimal level of transparency enabling supervision by some antitrust authority. Moreover, reducing encryption capabilities limits de facto the level of barriers to entry, and therefore the strength of monopoly power. More generally, self-enforcement has to be supervised by some last resort authority to ensure that encryption and self-regulation are not combined to develop and exercise monopoly power, and to guarantee open competition in the long run 20. Indeed, competition is the best solution for providing agents and communities with incentives to implement efficient solutions.
The openness of coordination platforms should also be maintained. We mentioned above that end-to-end connectivity provides agents with the ability to implement self-regulations in the digital world. It grants, for instance, Internet Service Providers (ISPs) the ability to manage their networks the way they want, including by implementing specific standards, specific addressing systems, etc. Depending on whether or not the ISP complies with Internet standards and with Internet management principles, it makes its network part of the Internet or a specific network, connected to the Internet by a gateway controlled by the ISP. The latter solution, aimed at increasing (local) quality (e.g. by managing priorities within digital flows or by reducing the scarcity of IP addresses), strongly decreases the transparency and the reliability of the network (because communication protocols become partly incompatible, because the addressing system is then composed of various non-transparent layers, etc.). Moreover, it gives a wide power of control to ISPs because the network is no longer of an end-to-end type. If such behavior became generalized, automatic interoperability among networks would disappear and ISPs would have to negotiate interconnection agreements and manage gateways among their networks. This would lead them to define what their users can do when interacting with users of other networks (by authorizing or forbidding various practices). Put another way, with the collapse of the end-to-end principle, network operators would become able to control the provision of information services on their networks. They would therefore be tempted to adopt strategies aimed at decreasing the competitive advantages of their competitors (either by forbidding access and downgrading the quality/price ratio of competitors' services, or by providing exclusive services on their own networks).
Such strategies would lead in the long run to the emergence of uncontestable monopolies, but they would also lead in the short term to a decreasing ability among information and coordination service providers to market their services on the global infrastructure, with unavoidable consequences for the diversity and the price of the services benefiting final users, since providers would have to write off the fixed costs of service provision over a reduced audience. This would have strong distribution effects and would harm innovation. Thus, without constitutional principles guaranteeing the protection of some fundamental rights to the users of the Internet, and without antitrust policies aimed at maintaining a sufficient level of contestability, public goods that are provided on an open basis may simply disappear; this is a fundamental concern when the public goods in question are the infrastructure on which additional public goods are provided (together with private goods). 20 In that respect, it has to be pointed out that there is a strong transparency-security dilemma over the Internet. On the one hand, the long-term sustainability of the competitive process in information networks calls for a minimum level of transparency. This is essential to enable users to compare alternative supply conditions. It is also crucial for supervising potential anti-competitive behavior. On the other hand, the protection of contents (both the privacy of information exchanges and property rights) leads to encryption. This raises complex problems, because even if it is not justified to broadcast publicly the content of all information exchanges, it is necessary to verify that information exchanges are not harmful to the collectivity, as could be the case if they were aimed at settling collusive agreements, infringing intellectual property rights or performing criminal activities.
This example makes it clear that it is essential to guarantee, in the digital economy, the end-to-end character of the Internet, which is at the heart of its reliability and flexibility. More generally, there is a tension between the individual ability to implement local specialized orders and the necessity of preserving some common coordination means. The latter obviously calls for binding the ability of local regulators to implement their preferred orders.

Optimal vs. Actual Governance Mechanisms
To sum up, digital technologies challenge the efficiency of the existing institutional framework, based on interactions among public and private norm setters under the control of public national authorities. ICTs empower all kinds of communities by providing them with tools allowing them to implement, at relatively low cost, collective orders that can be built behind, beside, or above the existing public orders built by national States. One of the interests of the Internet is precisely its ability to structure communities emancipated from previous institutional frameworks or geographical constraints (which does not mean that communities defined on pre-existing jurisdictional or geographical bases are not relevant when it comes to regulating located practices and problems). The decrease in enforcement costs allows the building of self-enforceable regulations at a larger scale than before, whose sole boundary is the minimal consistency of communities (whose members should share values and preferences). The second main advantage of the Internet and related technologies is that code allows a strong customization of the management of rights of access and use, reducing the maladaptation costs borne by agents in the pre-existing institutional frameworks. Last but not least, the ability of individuals and communities to self-organize and to design innovative coordination processes is a strong source of technical, organizational and institutional innovation. A coordination of these practices is nevertheless unavoidable to guarantee the efficiency, consistency and sustainability of this ability to implement innovative coordination solutions adapted to economic agents' needs.
A full decentralization of the design of the regulatory framework would lead to an excessive privatization of information and coordination platforms that would be detrimental to collective welfare because it would deprive society of access to non-rival resources. In addition, the fragmentation of the Internet and of the information space would reduce opportunities to coordinate efficiently. This calls for a last resort authority with the purpose of bounding the power of norm setters to force them to provide access to their resources (disclosure rules, open access obligations, bounded encryption capabilities, etc.). At the same time, the last resort enforcer should guarantee the norm setters a reinforced enforceability of the rules they set (as long as they do not infringe any superior constitutional principles). Digital enforcement mechanisms being subject to bypass, especially if encryption capabilities are bounded, local norms could be subject to an excessively high violation rate, ruining communities' abilities to implement efficient collective orders adapted to various specific situations and preferences.
It should be pointed out that the questions raised by the Internet are not only linked to the fact that digital technologies upset the traditional trade-off between self-implemented orders and orders based on the State's authority. They also have to do with the global nature of the end-to-end network, which means that there is no last resort arbitrator. What is at stake is not only fixing the problems raised by the proliferation of self-regulations; more generally, it means dealing with the proliferation of orders. While the technology is empowering private norm setters, the orders built by national States are still in force. First, as long as the interactions regulated on digital networks still have a material, and therefore located, dimension, national States can affect them. Second, even for fully digitized operations, national States can try to force the users of the Internet to enforce their legislation. They try to do so in particular by threatening to sue Internet Service Providers if the latter do not make national legislation enforceable for local users. Moreover, for many citizens, national States remain essential legitimate regulators, expected to provide them with security and the protection of some fundamental rights (and at least a defense against economic dominance). In addition, while national governments often fail to coordinate, because they have conflicting interests, convergences may exist. In such cases (like network security), national States really have the ability to impose an order 21. Consequently, while the initiators of private orders can bypass the traditional providers of public orders, the latter still have the ability to implement some orders and to influence the private orders that eventually attempt to bypass them. It remains true, however, that this ability is bounded.
The current situation is therefore characterized neither by the disappearance of public regulations, nor by the perpetuation of the traditional State regulation organized at the national level and associated with intergovernmental coordination at the international level, but by the proliferation of orders implemented by both State and non-State actors, at both the infra- and supra-national levels. Indeed, private institutions, either local or global, settle various types of self-regulations in parallel with the efforts made by local and national governments together with inter-governmental organizations. The noticeable characteristic of the current situation is that there is no established hierarchy among these orders. None of them is able to impose itself on the others, in the sense that the norms of "hierarchically inferior" orders would have to comply with those of a "supreme" one.
The need for central coordination calls neither for traditional direct State intervention 22, nor for centrally designed regulations. The economics of multilevel governance calls for a federal institutional model enforcing a subsidiarity principle. A central and last resort device should overhang the decentralized, multilevel and multi-type process of norm implementation. Its role would not be to set up norms, but rather to settle conflicts among decentrally implemented norms, in order to guarantee, in particular, a minimum level of consistency among orders and their conformity with basic constitutional principles. These principles should aim at implementing the most efficient solutions by taking into account interdependencies and the interests of the widest possible set of stakeholders, as well as at guaranteeing access to essential public goods.
Beyond its logical justification, the implementation of a regulator of last resort is made possible by the necessity of centrally managing the addressing system of the Internet. Control over the management of the addressing system would provide the entity responsible for regulation in the last resort with the means of carrying out its assignment. Indeed, it would enable it to exert a credible threat of excluding agents from access to the cyber-world, which it could use to have its decisions and regulations respected. In turn, only a well-designed entity should be allowed to control the system of inclusion in/expulsion from the Internet.
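The enforcement power conferred by control of the addressing system can be conveyed by a minimal sketch (the class, names and addresses below are hypothetical, for illustration only): whoever administers the registry that maps names to addresses can make a non-compliant party unreachable, which is precisely the credible threat of exclusion discussed above.

```python
# Minimal sketch of the addressing system as an enforcement instrument
# (hypothetical names and interface; not ICANN's actual systems).

class Registry:
    def __init__(self):
        self._table = {}

    def register(self, name, address):
        self._table[name] = address

    def revoke(self, name):
        # The last resort regulator's sanction: removal from the registry.
        self._table.pop(name, None)

    def resolve(self, name):
        # None means the name is no longer reachable on the network.
        return self._table.get(name)

registry = Registry()
registry.register("example.org", "192.0.2.7")
print(registry.resolve("example.org"))  # the party is reachable
registry.revoke("example.org")          # sanction for non-compliance
print(registry.resolve("example.org"))  # None: effectively expelled
```

The sketch shows why the text insists that only a well-designed entity should hold this power: revocation is cheap, global and hard to appeal once the registry is the single point of inclusion.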
The above conclusion drawn from theory raises, however, the problem of how this last resort authority should be constituted and implemented. In a sense, it should be at the same time the Constituent Assembly and the Supreme Court of a global "cyber republic", since it should both set up the constitutional principles and implement them. However, can such a body be instituted in the absence of an entity like a global government, of a recognized concept of global citizenship, or of accepted processes to elect global representatives? In any case, the way the present regulation of the Internet is designed and implemented does not fit at all with these principles.
Presently, the embryos of several alternative regulatory frameworks are developing, in parallel with an international debate on the institutional framework required to govern and regulate digital networks (and the information and knowledge based society). To a certain extent, the developing solutions are complementary, since they do not address exactly the same issues. Several organizations emerged in the 1990s to oversee the technical regulation of the Internet, while more recently international forums were established to discuss the economic, political and social aspects of the regulation of the Internet. On the one hand, the former organizations could become the technical administrators of the Internet, in charge of implementing the principles resulting from the "political" debates. On the other hand, these organizations and forums can be considered as resulting from alternative visions of the way the Internet and information activities should be regulated: namely, on a unilateral or a multilateral basis. The Internet being the result of the efforts of the US Federal Government, the latter decided to delegate the technical governance of what was becoming a global network to a new type of agency, whose model had to be invented, but which should be neither an inter-governmental organization, nor an agency emanating from the United Nations. This led to the foundation of ICANN in 1998. The same year, the UN and its agencies, in particular the ITU 23, launched a process aimed at implementing a multilateral system to govern the Internet, grounded in international treaties.
ICANN (Internet Corporation for Assigned Names and Numbers; www.icann.org), the organization currently in charge of "governing" the addressing system of the Internet, is the core of the US unilateral strategy. It already plays a central role: by being responsible for distributing IP numbers and Domain Names, ICANN controls de facto inclusion in and exclusion from the Internet. ICANN is the product of a tumultuous process of creation 24 and evolution. With the development of the commercial Internet and its internationalization, numerous interest groups pushed for the emergence of an organization that would involve all the stakeholders of the Internet. ICANN managed to include most of them. ICANN's members can therefore be public or private organizations involved in the development or the uses of the Internet, governments and public agencies, and individuals ("netizens"). Ad hoc committees group these various communities, and there are complex processes of election and nomination in a wide number of committees responsible for the various domains dealt with by ICANN. In addition, ICANN coordinates with several other organizations in charge of related issues, such as the World Intellectual Property Organization (WIPO) or the Internet Engineering Task Force (IETF). Figure 1 synthesizes this complex web of committees and cooperative relationships. 23 Headquartered in Geneva, the International Telecommunication Union (www.itu.int) is an international organization within the United Nations system. It was established in 1865 to facilitate the international interconnection of telegraphy, and is a unique partnership of industry (formerly the national telecommunications monopolies) and governments. The ITU develops mutually agreed, non-binding recommendations aimed at enabling interconnection and interoperability among telecommunication networks, based on rules of interoperation (and numbering) and common standards (a task formerly performed by the CCITT, now called ITU-T).
Insert Figure 1 here : Internet Governance: The Current Institutional Framework
The main weaknesses of ICANN in its present form are the following. First, the legitimacy of each member is not guaranteed by any accreditation process. Moreover, the relationships (and the hierarchy) among the various types of members are unclear. The result is an organization that guarantees neither that the interests of the various stakeholders are taken into account and hierarchized, nor that its decisions will be consistent, nor even that it can make decisions at all. Figure 1 illustrates well how complex ICANN's current organization is. Among other things, it shows clearly that the various categories of stakeholders are not considered equally in the decision process. Commercial interests, and in particular those of the dominant corporations in the digital industries, are clearly overweighted as compared to citizens and even to the (non-US) Governments.
Second, ICANN is not autonomous, since it is a contractor of the US Government. Moreover, the contract between the US Government and ICANN is only transitory, and the technical implementation of the DNS (and therefore of ICANN's decisions) is ensured by another contractor of the US Government: Verisign. As a result, ICANN is neither an independent organization nor a strong one. It is presently not autonomous, and the institutions responsible in the last resort for the enforcement of ICANN's rules (the US courts and the US Government) cannot be considered fully legitimate: since their purpose is to protect the interests of US citizens, they cannot impartially protect the interests of all the stakeholders of the Internet 25.
The IETF and the IAB thus became components of ISOC. Initially, ISOC was supposed to manage both the standards and the addressing system of the Internet. However, policy makers and industry lobbies denied this new organization the legitimacy to manage the DNS. Because ISOC was expected to be controlled by US computer scientists, these groups estimated that it could take into account neither the interests of non-US citizens and foreign Governments, nor those of the industry, especially the owners of intellectual title deeds (notably trademarks and brands). This led to the foundation of ICANN in 1998 as a compromise between the historical Internet community and the new stakeholders (governments, business community). A Memorandum of Understanding (MoU) signed with the US Government established its mandate.
Given its dependence on the US Government, its lack of actual authority, and its fuzzy way of taking into account and balancing the various interests of the stakeholders, ICANN's legitimacy is often challenged when it comes to transforming it into the instance responsible in the last resort for the regulation of the Internet. Because of the US Government's unwillingness to let it become a multilateral entity, a coalition of the UN, many national Governments and many NGOs agreed to launch an initiative aimed at (re)founding the principles of the governance of the information society. This led to the World Summit on the Information Society (WSIS) 26, responsible for establishing the basis of a global governance of digital activities. However, this initiative features three strong weaknesses. First, the objectives of the process are rather unclear. Because the various governments - and especially the US government - did not agree on a precise agenda before launching the process, the WSIS covers all the aspects of the regulation of the information society 27. This excessively broad agenda makes it impossible to concentrate on a specific set of fundamental questions on which agreements and compromises could be discussed. In addition, all the possible lobbies are playing a complex game in this process. Second, the decision mechanisms are unclear. On the one hand, the WSIS is an intergovernmental conference organized by the UN. On the other hand, NGOs, corporations and many other categories of stakeholders are invited to contribute within the frame of a fuzzy process of consultation/contribution. Third, this process was launched without the agreement, and is run without the effective participation, of the instances that currently control the essential resources and the technical regulatory tools that command the governance of the Internet.
In particular, the US Government does not support the process, meaning that what comes out of it will most probably not be implemented 28. Thus, the instances and the processes that are currently at the heart of the regulation of the Internet do not guarantee at all that the first-best institutional architecture to regulate the digital world could be designed and implemented. This is well illustrated by the report of the Working Group on Internet Governance (WGIG), released in July 2005 to prepare the conclusion of the WSIS. The report (www.wgig.org) was prepared by a 40-member UN panel gathered from around the world and including representatives from business, academia and government. It benefited from multiple inputs provided by all kinds of stakeholders. The working group was unable to agree on a single alternative. Instead, it presents four options.
26 The UN General Assembly Resolution 56/183 (21 December 2001) endorsed the holding of the World Summit on the Information Society (http://www.itu.int/wsis/index.html) in two phases, hosted in Geneva in December 2003 and in Tunis in November 2005. The goal of a UN summit is fundamentally to settle a plan of action that may include intergovernmental conferences and the settling of new international organizations. More generally, it aims at reaching consensus. The ITU was at the origin of the Summit and is in charge of organizing it. While recommending representation from governments at the highest level, the Summit also invited participation of all relevant UN bodies and other international organizations, non-governmental organizations, the private sector, civil society, and the media to establish a multi-stakeholder process. Several preparatory conferences, working groups and on-line consultations were run to prepare the two phases of the Summit.
27 According to the official declaration resulting from the Geneva Summit in 2003, the goals of the WSIS embrace the techno-economic regulation of digital activities (provision of universal, accessible, equitable and affordable infrastructure and services; solutions aimed at guaranteeing information and network security; consumer protection; transparent, pro-competitive, technologically neutral and predictable public economic regulations) and the socio-political regulation of information and knowledge-based activities (in particular the guarantee of privacy, pluralism and media diversity, optimal intellectual property regimes encouraging creativity and the need to share knowledge, education and the reduction of the digital divide), as well as the promotion of political and philosophical principles such as the rule of law, human rights and fundamental freedoms, and the respect for cultural and linguistic diversity as well as traditions, religions …
28 From the US and ICANN point of view, the WSIS is seen as an attempt by, respectively, the UN and the ITU to gain control over the regulation of the Internet. While there is a consensus about the idea that new instances placed above the bodies in charge of the technical regulation should take charge of the social regulation necessary for the development of the information and knowledge-based economy and society, a fundamental opposition clearly remains between multilateralism and an alternative vision. The leader of the alternative vision is clearly the US Government. However, it is supported by a set of other stakeholders -from non-US Governments to commercial interests and including NGOs -that fear a regulatory framework in which
These options range from a maximal one - a global Internet council within the UN system, based on three components (one to address policy issues, one for oversight and one for global coordination), that would take over the supervision of ICANN and set international Internet policy - to a "status quo plus" arrangement that would enhance the role of ICANN's existing Governmental Advisory Committee (GAC). Intermediate solutions are the creation of a world body to address the public policy issues stemming from the work of ICANN, or the creation of a body to address a broader range of public policy issues. In a context in which the US Department of Commerce continues to claim that it has no intention of giving up its historic role as overseer of the Internet domain name and addressing system run by ICANN (stressing the need to ensure stability and security), it is likely that the WSIS will result in the setting up of several forums to coordinate efforts in favor of on-line security (from the control of spam to the tracking and management of identity), to combat Internet-related crime, to harmonize business-related legislation, to manage issues related to freedom of expression and human rights, etc., but will fail to implement instances able to really govern the Internet.

Toward a Step-by-Step Approach to the Design of Regulatory Frameworks
Could a relevant order emerge from the complex process occurring today, in which a wide number of individuals and organizations act and interact in an un-hierarchized way, either by decentrally creating (private) orders or by lobbying in the various arenas creating orders (from standardization committees to intergovernmental negotiations, and including national law-making processes or the design of self-regulation mechanisms)? The answer could be yes, if a relevant arena to harmonize these initiatives could be organized.
Before analyzing how this could be possible, it is useful to consider the current situation. On the one hand, it could be considered chaotic. On the other hand, several enabling conditions for harmonization exist.
First, while there is a proliferation of regulating entities with different statuses, there is no hierarchy among them. More precisely, none of these entities is able to definitively impose its order on the others (even if some entities have more "bargaining" power than others). This is due to the self-implementation capabilities provided by digital technologies to the various norm setters. On the one hand, private norm setters can bypass public authorities to implement faked or trans-territorial orders; on the other hand, public authorities and social communities still benefit from their traditional means of setting collective orders. Thus, whatever the nature of the institutions currently involved in the settlement of rules that play a role in the regulation of the Internet, they have to "negotiate", since no norm setter is able to impose its order on the other norm setters. This permanent "bargaining" is well illustrated by what is happening today in relation to file sharing in the music industry, among the P2P "communities", the governments, and the lobbies of the authors, artists and recording companies.
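The self-implementation capability mentioned above can be made concrete with a minimal sketch: a content producer can self-enforce an exclusive right of use by purely technical means, without relying on any public authority. The construction below is a toy (a hash-derived XOR keystream, not real cryptography - production systems use vetted schemes such as AES), and the content and key names are hypothetical; it only illustrates that access can be conditioned on membership in a community holding a key.

```python
# Toy illustration of technical self-enforcement of an exclusive right
# of use (NOT real cryptography; names and keys are hypothetical).
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless byte stream from the key (toy construction)."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def scramble(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice restores the data."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

content = b"premium report"
locked = scramble(content, b"community-key")

# A member of the community (holding the key) recovers the content;
# for everyone else the locked form is noise. Exclusion holds by
# technology alone, with no court or regulator involved.
assert scramble(locked, b"community-key") == content
assert locked != content
```

This is precisely why private norm setters need not wait for public enforcement: the exclusion is built into the artifact itself, leaving public authorities and other norm setters to "negotiate" rather than simply command.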
Second, "negotiation" among the various producers of collective orders is possible and necessary because most of these orders are incomplete. They do not seek to measure and enforce a complete set of property rights, but rather to delineate rights of use for a limited number of uses and a limited number of resources. In concrete terms, some private orders implement technical standards to organize the on-line sharing of information, while others design rules to be applied to the encryption of information, and yet others organize auction mechanisms on specific markets, etc. While there is sometimes direct competition between two orders, much of the competition among norm setters occurs at the fringe of their domains of competence. There are therefore many cases in which the various orders are complementary, and the promoters of these orders have incentives to negotiate (or to adopt some open meta-standards) to guarantee compatibility among their orders.
In addition to these two enabling conditions, there is a strong driving force for harmonization: the needs of users. Two phenomena combine. First, the globalization of many activities, and in particular of the economy, makes it worthwhile for many players, especially businesses, to benefit from a seamless global information and coordination space. They therefore have a strong interest in the reduction of the trans-territorial heterogeneity of formal and informal public norms. Second, most communities behind private norms are not exclusive. They are fuzzy sets (in the sense of fuzzy set theory), meaning that a given individual often belongs to several communities. Individuals and organizations are thus also pushing for a harmonization of the norms established by private digital/on-line norm setters, since it would simplify, and make less costly, the activities they perform in the various on-line communities they belong to 29.
While there is competition among norm setters to establish and widen the scope of their coordination solutions (Eric Brousseau and Emmanuel Raynaud, 2005), they are creating conditions and an impetus leading them to harmonize the institutional framework they are building. Indeed, since the digital space is the locus where many norms interact, it permanently reveals conflicts among private orders, among public orders, and between public and private orders. Public and private norm setters cannot ignore these conflicts of norms, because they reduce the use value of their norms and their potential adoption. Moreover, they cannot rely only on the selection process among norms, because selection takes time and is uncertain.
To satisfy the actual or potential adopters of their norms, and therefore to ensure their own sustainability, norm setters - whether they are public authorities, professional organizations, on-line self-organized communities, market organizers, etc. - have incentives to cooperate to solve conflicts either ex-ante or ex-post.
Ex-ante, multiple decentralized negotiations already exist, of two types. First, at the national level, public authorities negotiate with the private order setters, often by involving them in the processes of adapting the legal order to the opportunities and constraints raised by digital technologies and by the development of the knowledge-based economy. Second, at the global level there is minimal coordination among the various private entities whose responsibilities overlap or interact. When one considers the "technical" regulation of the Internet, there are for instance various formats of coordination among the IETF, the W3C, ICANN, and many other standard-setting committees. In addition, weak coordination exists among public orders through the management of international treaties and of the intergovernmental organizations responsible for designing common rules to be applied to information resources and networks (WIPO, ITU, etc.). Negotiations also take place within non-specialized intergovernmental organizations such as the UN, the OECD, or the European Union. Negotiations do occur, then, but their decentralization does not by any means guarantee convergence. Moreover, targeted and local negotiations may end up in incompatible agreements, merely transforming the "level" of discrepancies among decentrally set norms. It would therefore be useful to implement an international forum in which these decentralized negotiations could be, even weakly, coordinated. A minimal way to do so would simply be to share information and knowledge about the properties of alternative regulations. To a certain extent, a common "blackboard", such as those which characterize many on-line communities (Gensollen, 2005), would be a useful tool to allow the sharing of experience and knowledge, and to encourage the adoption of more efficient practices.
Of course, such ex-ante coordination could be further developed, especially under the pressure of users/citizens, who would value more universal coordination platforms (to manage the production, distribution and sharing of information and knowledge, and to support inter-agent coordination).
One of the main drivers of the evolution of rules toward common rules is the (judicial) resolution of conflicts. Indeed, when there are gains to be made by trading or cooperating, and when exchanges are hindered by discrepancies among property rights systems and among collective regulations, parties have strong incentives to push for an evolving system of rules. They can bypass the existing institutional framework, but this has a cost (either because they have to settle on alternative orders, or because, the existing framework being mandatory - e.g. the law - bypassing it could be costly). The alternative strategy is to try to have their conflict resolved by a mechanism responsible in the last resort for solving conflicts between the two orders. In addition to the (local) conflict resolution, such a solution has a strong advantage: it allows those in charge of setting collective orders to learn about the inefficiencies of the solutions they implement. In addition, the way the conflict is solved can provide the two norm setters with solutions to avoid future conflicts. Put another way, conflict resolution is a powerful tool enabling not only the ex-post solving of problems due to conflicts among norms, but also learning and innovation efforts aimed at identifying discrepancies among norms and satisfactory solutions to harmonize them. Again, this call for a common arena to solve conflicts is related to the proliferation of regulations in the cyber-world. To a certain extent, the cooperation between the WIPO and ICANN played this role in the case of property rights over commercial names (trademarks and domain names; Cécile Meadel and Meryem Merzouki, 2004).
Implementing a framework comparable in its functioning, but not in its constituency, to the World Trade Organization (WTO), in which the various norm setters would be able to explicitly negotiate ex-ante and to solve conflicts ex-post, would therefore be the second rank solution to the current problem raised by the proliferation of public and private orders on the Internet. It would be a second rank solution because it would be less satisfactory than the management of an ex-ante coordination based on the settling of basic constitutional rights of the global cyber-citizens and the implementation of an authority of last resort to guarantee them. Indeed, it would take time to reach a global agreement on a set of fundamental basic rights, and, the negotiations being organized among unequal norm setters, the optimality of the solution would not be guaranteed. However, an organized negotiation has two advantages over fully decentralized negotiations. Centralization is a necessary (though not sufficient) condition for designing consistent "local" solutions. In addition, centralization creates the possibility of benefiting from learning and knowledge-sharing effects.