DarkFi

Build Status | Web: dark.fi | Manifesto: unsystem | Book: mdbook

About DarkFi

DarkFi is a new Layer 1 blockchain, designed with anonymity at the forefront. It offers flexible private primitives that can be wielded to create any kind of application. DarkFi aims to make anonymous engineering highly accessible to developers.

DarkFi uses advances in zero-knowledge cryptography and includes a contracting language and developer toolkits to create uncensorable code.

In the open air of a fully dark, anonymous system, cryptocurrency has the potential to birth new technological concepts centered around sovereignty. This can be a creative, regenerative space - the dawn of a Dark Renaissance.

Connect to DarkFi IRC

Follow the installation instructions for the P2P IRC daemon.

Build

This project requires the Rust compiler to be installed. Please visit Rustup for instructions.

The minimum supported Rust version is 1.65.0 (stable).

The following dependencies are also required:

Dependency    Debian-based
git           git
make          make
jq            jq
gcc           gcc
pkg-config    pkg-config
openssl libs  libssl-dev

Users of Debian-based systems (e.g. Ubuntu) can simply run the following to install the required dependencies:

# apt-get update
# apt-get install -y git make jq gcc pkg-config libssl-dev

Alternatively, users can try the automated script under the contrib folder by executing:

% sh contrib/dependency_setup.sh

The script will try to recognize which system you are running, and install dependencies accordingly. In case it does not find your package manager, please consider adding support for it into the script and sending a patch.

To build the necessary binaries, we can just clone the repo, and use the provided Makefile to build the project. This will download the trusted setup params, and compile the source code.

% git clone https://github.com/darkrenaissance/darkfi
% cd darkfi/
% make

Development

If you want to hack on the source code, make sure to read some introductory advice in the DarkFi book.

Install

This will install the binaries on your system (/usr/local by default). The configuration files for the binaries are bundled with the binaries and contain sane defaults. You'll have to run each daemon once in order for them to spawn a config file, which you can then review.

# make install

Bash Completion

This will enable auto-completion for the command-line options of drk and darkfid.

% echo source \$(pwd)/contrib/auto-complete >> ~/.bashrc

Examples and usage

See the DarkFi book

Go Dark

Let's liberate people from the claws of big tech and create the democratic paradigm of technology.

Self-defense is integral to any organism's survival and growth.

Power to the minuteman.

Definition of Democratic Civilization

From 'The Sociology of Freedom: Manifesto of the Democratic Civilization, Volume 3' by Abdullah Ocalan.

Annotations are our own. The text is otherwise unchanged.

What is the subject of moral and political society?

The school of social science that postulates the examination of the existence and development of social nature on the basis of moral and political society could be defined as the democratic civilization system. The various schools of social science base their analyses on different units. Theology and religion prioritize society. For scientific socialism, it is class. The fundamental unit for liberalism is the individual. There are, of course, schools that prioritize power and the state and others that focus on civilization. All these unit-based approaches must be criticized, because, as I have frequently pointed out, they are not historical, and they fail to address the totality. A meaningful examination would have to focus on what is crucial from the point of view of society, both in terms of history and actuality. Otherwise, the result will only be one more discourse.

Identifying our fundamental unit as moral and political society is significant, because it also covers the dimensions of historicity and totality. Moral and political society is the most historical and holistic expression of society. Morals and politics themselves can be understood as history. A society that has a moral and political dimension is a society that is the closest to the totality of all its existence and development. A society can exist without the state, class, exploitation, the city, power, or the nation, but a society devoid of morals and politics is unthinkable. Societies may exist as colonies of other powers, particularly capital and state monopolies, and as sources of raw materials. In those cases, however, we are talking about the legacy of a society that has ceased to be.

Individualism is a state of war

There is nothing gained by labeling moral and political society—the natural state of society—as slave-owning, feudal, capitalist, or socialist. Using such labels to describe society masks reality and reduces society to its components (class, economy, and monopoly). The bottleneck encountered in discourses based on such concepts as regards the theory and practice of social development stems from errors and inadequacies inherent in them. If all of the analyses of society referred to with these labels that are closer to historical materialism have fallen into this situation, it is clear that discourses with much weaker scientific bases will be in a much worse situation. Religious discourses, meanwhile, focus heavily on the importance of morals but have long since turned politics over to the state. Bourgeois liberal approaches not only obscure the society with moral and political dimensions, but when the opportunity presents itself they do not hesitate to wage war on this society. Individualism is a state of war against society to the same degree as power and the state is. Liberalism essentially prepares society, which is weakened by being deprived of its morals and politics, for all kinds of attacks by individualism. Liberalism is the ideology and practice that is most anti-society.

The rise of scientific positivism

In Western sociology (there is still no science called Eastern sociology) concepts such as society and civilization system are quite problematic. We should not forget that the need for sociology stemmed from the need to find solutions to the huge problems of crises, contradictions, and conflicts and war caused by capital and power monopolies. Every branch of sociology developed its own thesis about how to maintain order and make life more livable. Despite all the sectarian, theological, and reformist interpretations of the teachings of Christianity, as social problems deepened, interpretations based on a scientific (positivist) point of view came to the fore. The philosophical revolution and the Enlightenment (seventeenth and eighteenth centuries) were essentially the result of this need. When the French Revolution complicated society’s problems rather than solving them, there was a marked increase in the tendency to develop sociology as an independent science. Utopian socialists (Henri de Saint-Simon, Charles Fourier, and Pierre-Joseph Proudhon), together with Auguste Comte and Émile Durkheim, represent the preliminary steps in this direction. All of them are children of the Enlightenment, with unlimited faith in science. They believed they could use science to re-create society as they wished. They were playing God. In Hegel’s words, God had descended to earth and, what’s more, in the form of the nation-state. What needed to be done was to plan and develop specific and sophisticated “social engineering” projects. There was no project or plan that could not be achieved by the nation-state if it so desired, as long as it embraced the “scientific positivism” and was accepted by the nation-state!

Capitalism as an iron cage

British social scientists (political economists) added economic solutions to French sociology, while German ideologists contributed philosophically. Adam Smith and Hegel in particular made major contributions. There was a wide variety of prescriptions from both the left and right to address the problems arising from the horrendous abuse of the society by the nineteenth-century industrial capitalism. Liberalism, the central ideology of the capitalist monopoly has a totally eclectic approach, taking advantage of any and all ideas, and is the most practical when it comes to creating almost patchwork-like systems. It was as if the right- and left- wing schematic sociologies were unaware of social nature, history, and the present while developing their projects in relation to the past (the quest for the “golden age” by the right) or the future (utopian society). Their systems would continually fragment when they encountered history or current life. The reality that had imprisoned them all was the “iron cage” that capitalist modernity had slowly cast and sealed them in, intellectually and in their practical way of life. However, Friedrich Nietzsche’s ideas of metaphysicians of positivism or castrated dwarfs of capitalist modernity bring us a lot closer to the social truth. Nietzsche leads the pack of rare philosophers who first drew attention to the risk of society being swallowed up by capitalist modernity. Although he is accused of serving fascism with his thoughts, his foretelling of the onset of fascism and world wars was quite enticing.

The increase in major crises and world wars, along with the division of the liberal center into right- and left-wing branches, was enough to bankrupt positivist sociology. In spite of its widespread criticism of metaphysics, social engineering has revealed its true identity with authoritarian and totalitarian fascism as metaphysics at its shallowest. The Frankfurt School is the official testimonial of this bankruptcy. The École Annales and the 1968 youth uprising led to various postmodernist sociological approaches, in particular Immanuel Wallerstein’s capitalist world-system analysis. Tendencies like ecology, feminism, relativism, the New Left, and world-system analysis launched a period during which the social sciences splintered. Obviously, financial capital gaining hegemony as the 1970s faded also played an important role. The upside of these developments was the collapse of the hegemony of Eurocentric thought. The downside, however, was the drawbacks of a highly fragmented social sciences.

The problems of Eurocentric sociology

Let’s summarize the criticism of Eurocentric sociology:

  1. Positivism, which criticized and denounced both religion and metaphysics, has not escaped being a kind of religion and metaphysics in its own right. This should not come as a surprise. Human culture requires metaphysics. The issue is to distinguish good from bad metaphysics.

  2. An understanding of society based on dichotomies like primitive vs. modern, capitalist vs. socialist, industrial vs. agrarian, progressive vs. reactionary, divided by class vs. classless, or with a state vs. stateless prevents the development of a definition that comes closer to the truth of social nature. Dichotomies of this sort distance us from social truth.

  3. To re-create society is to play the modern god. More precisely, each time society is recreated there is a tendency to form a new capital and power-state monopoly. Much like medieval theism was ideologically connected to absolute monarchies (sultanates and shāhanshāhs), modern social engineering as recreation is essentially the divine disposition and ideology of the nation-state. Positivism in this regard is modern theism.

  4. Revolutions cannot be interpreted as the re-creation acts of society. When thusly understood they cannot escape positivist theism. Revolutions can only be defined as social revolutions to the extent that they free society from excessive burden of capital and power.

  5. The task of revolutionaries cannot be defined as creating any social model of their making but more correctly as playing a role in contributing to the development of moral and political society.

  6. Methods and paradigms to be applied to social nature should not be identical to those that relate to first nature. While the universalist approach to first nature provides results that come closer to the truth (I don’t believe there is an absolute truth), relativism in relation to social nature may get us closer to the truth. The universe can neither be explained by an infinite universalist linear discourse or by a concept of infinite similar circular cycles.

  7. A social regime of truth needs to be reorganized on the basis of these and many other criticisms. Obviously, I am not talking about a new divine creation, but I do believe that the greatest feature of the human mind is the power to search for and build truth.

A new social science

In light of these criticisms, I offer the following suggestions in relation to the social science system that I want to define:

A more humane social nature

  1. I would not present social nature as a rigid universalist truth with mythological, religious, metaphysical, and scientific (positivist) patterns. Understanding it to be the most flexible form of basic universal entities that encompass a wealth of diversities but are tied down to conditions of historical time and location more closely approaches the truth. Any analysis, social science, or attempt to make practical change without adequate knowledge of the qualities of social nature may well backfire. The monotheistic religions and positivism, which have appeared throughout the history of civilization claiming to have found the solution, were unable to prevent capital and power monopolies from gaining control. It is therefore their irrevocable task, if they are to contribute to moral and political society, to develop a more humane analysis based on a profound self-criticism.

  2. Moral and political society is the main element that gives social nature its historical and complete meaning and represents the unity in diversity that is basic to its existence. It is the definition of moral and political society that gives social nature its character, maintains its unity in diversity, and plays a decisive role in expressing its main totality and historicity. The descriptors commonly used to define society, such as primitive, modern, slave-owning, feudal, capitalist, socialist, industrial, agricultural, commercial, monetary, statist, national, hegemonic, and so on, do not reflect the decisive features of social nature. On the contrary, they conceal and fragment its meaning. This, in turn, provides a base for faulty theoretical and practical approaches and actions related to society.

Protecting the social fabric

  1. Statements about renewing and re-creating society are part of operations meant to constitute new capital and power monopolies in terms of their ideological content. The history of civilization, the history of such renewals, is the history of the cumulative accumulation of capital and power. Instead of divine creativity, the basic action the society needs most is to struggle against factors that prevent the development and functioning of moral and political social fabric. A society that operates its moral and political dimensions freely, is a society that will continue its development in the best way.

  2. Revolutions are forms of social action resorted to when society is sternly prevented from freely exercising and maintaining its moral and political function. Revolutions can and should be accepted as legitimate by society only when they do not seek to create new societies, nations, or states but to restore moral and political society its ability to function freely.

  3. Revolutionary heroism must find meaning through its contributions to moral and political society. Any action that does not have this meaning, regardless of its intent and duration, cannot be defined as revolutionary social heroism. What determines the role of individuals in society in a positive sense is their contribution to the development of moral and political society.

  4. No social science that hopes to develop these key features through profound research and examination should be based on a universalist linear progressive approach or on a singular infinite cyclical relativity. In the final instance, instead of these dogmatic approaches that serve to legitimize the cumulative accumulation of capital and power throughout the history of civilization, social sciences based on a non-destructive dialectic methodology that harmonizes analytical and emotional intelligence and overcomes the strict subject-object mold should be developed.

The framework of moral and political society

The paradigmatic and empirical framework of moral and political society, the main unit of the democratic civilization system, can be presented through such hypotheses. Let me present its main aspects:

  1. Moral and political society is the fundamental aspect of human society that must be continuously sought. Society is essentially moral and political.

  2. Moral and political society is located at the opposite end of the spectrum from the civilization systems that emerged from the triad of city, class, and state (which had previously been hierarchical structures).

  3. Moral and political society, as the history of social nature, develops in harmony with the democratic civilization system.

  4. Moral and political society is the freest society. A functioning moral and political fabric and organs is the most decisive dynamic not only for freeing society but to keep it free. No revolution or its heroines and heroes can free the society to the degree that the development of a healthy moral and political dimension will. Moreover, revolution and its heroines and heroes can only play a decisive role to the degree that they contribute to moral and political society.

  5. A moral and political society is a democratic society. Democracy is only meaningful on the basis of the existence of a moral and political society that is open and free. A democratic society where individuals and groups become subjects is the form of governance that best develops moral and political society. More precisely, we call a functioning political society a democracy. Politics and democracy are truly identical concepts. If freedom is the space within which politics expresses itself, then democracy is the way in which politics is exercised in this space. The triad of freedom, politics, and democracy cannot lack a moral basis. We could refer to morality as the institutionalized and traditional state of freedom, politics, and democracy.

  6. Moral and political societies are in a dialectical contradiction with the state, which is the official expression of all forms of capital, property, and power. The state constantly tries to substitute law for morality and bureaucracy for politics. The official state civilization develops on one side of this historically ongoing contradiction, with the unofficial democratic civilization system developing on the other side. Two distinct typologies of meaning emerge. Contradictions may either grow more violent and lead to war or there may be reconciliation, leading to peace.

  7. Peace is only possible if moral and political society forces and the state monopoly forces have the will to live side by side unarmed and with no killing. There have been instances when rather than society destroying the state or the state destroying society, a conditional peace called democratic reconciliation has been reached. History doesn’t take place either in the form of democratic civilization—as the expression of moral and political society—or totally in the form of civilization systems—as the expression of class and state society. History has unfolded as intense relationship rife with contradiction between the two, with successive periods of war and peace. It is quite utopian to think that this situation, with at least a five-thousand-year history, can be immediately resolved by emergency revolutions. At the same time, to embrace it as if it is fate and cannot be interfered with would also not be the correct moral and political approach. Knowing that struggles between systems will be protracted, it makes more sense and will prove more effective to adopt strategic and tactical approaches that expand the freedom and democracy sphere of moral and political society.

  8. Defining moral and political society in terms of communal, slave-owning, feudal, capitalist, and socialist attributes serves to obscure rather than elucidate matters. Clearly, in a moral and political society there is no room for slave-owning, feudal, or capitalist forces, but, in the context of a principled reconciliation, it is possible to take an aloof approach to these forces, within limits and in a controlled manner. What’s important is that moral and political society should neither destroy them nor be swallowed up by them; the superiority of moral and political society should make it possible to continuously limit the reach and power of the central civilization system. Communal and socialist systems can identify with moral and political society insofar as they themselves are democratic. This identification is, however, not possible, if they have a state.

  9. Moral and political society cannot seek to become a nation-state, establish an official religion, or construct a non-democratic regime. The right to determine the objectives and nature of society lies with the free will of all members of a moral and political society. Just as with current debates and decisions, strategic decisions are the purview of society’s moral and political will and expression. The essential thing is to have discussions and to become a decision-making power. A society who holds this power can determine its preferences in the soundest possible way. No individual or force has the authority to decide on behalf of moral and political society, and social engineering has no place in these societies.

Liberating democratic civilization from the State

When viewed in the light of the various broad definitions I have presented, it is obvious that the democratic civilization system—essentially the moral and political totality of social nature—has always existed and sustained itself as the flip side of the official history of civilization. Despite all the oppression and exploitation at the hands of the official world-system, the other face of society could not be destroyed. In fact, it is impossible to destroy it. Just as capitalism cannot sustain itself without noncapitalist society, civilization— the official world system— also cannot sustain itself without the democratic civilization system. More concretely the civilization with monopolies cannot sustain itself without the existence of a civilization without monopolies. The opposite is not true. Democratic civilization, representing the historical flow of the system of moral and political society, can sustain itself more comfortably and with fewer obstacles in the absence of the official civilization.

I define democratic civilization as a system of thought, the accumulation of thought, and the totality of moral rules and political organs. I am not only talking about a history of thought or the social reality within a given moral and political development. The discussion does, however, encompass both issues in an intertwined manner. I consider it important and necessary to explain the method in terms of democratic civilization’s history and elements, because this totality of alternate discourse and structures are prevented by the official civilization. I will address these issues in subsequent sections.

Recommended Books

Core Texts

  • Manifesto for a Democratic Civilization, parts 1, 2 & 3, by Ocalan. These are a good high-level overview of history, philosophy and spirituality, covering the 5000-year legacy of state civilization, the development of philosophy, and humanity's relationship with nature.
  • New Paradigm in Macroeconomics by Werner explains how economics and finance work on a fundamental level. Emphasizes the importance of economic networks in issuing credit, and goes through all the major economic schools of thought.
  • Authoritarian vs Democratic Technics by Mumford is a short 10 page summary of his books The Myth of the Machine parts 1 & 2. Mumford was a historian and philosopher of science and technology. His books describe the two dominant legacies within technology: one enslaving humanity, the other liberating it from the state.

Philosophy

  • The Story of Philosophy by Will Durant
  • The Sovereign Individual is very popular among crypto people. It makes several prescient predictions, including about cryptocurrency, algorithmic money and the response by nation states against this emergent technology. Good reading to understand the coming conflict between cryptocurrency and states.

Python

  • Python Crash Course by Eric Matthes. Good beginner text.
  • O'Reilly books: Python Data Science, Python for Data Analysis

Rust

  • The Rust Programming Language from No Starch Press. Good intro to learn Rust.
  • Rust for Rustaceans from No Starch Press is an advanced Rust book.

Mathematics

Abstract Algebra

  • Pinter is your fundamental algebra text. Everybody should study this book. My full solutions here.
  • Basic Abstract Algebra by Dover is also a good reference.
  • Algebra by Dummit & Foote. The best reference book you will use many times. Just buy it.
  • Algebra by Serge Lang. More advanced algebra book but often contains material not found in the D&F book.

Elliptic Curves

  • Washington is a standard text and takes a computational approach. The math is often quite obtuse because he avoids introducing advanced notation, instead often working with explicit algebraic equations.
  • Silverman is the best text but harder than Washington. The material however is rewarding.

Algebraic Geometry

  • Ideals, Varieties and Algorithms by Cox, Little, O'Shea. They have a follow up advanced graduate text called Using Algebraic Geometry. It's the sequel book explaining things that were missing from the first text.
  • Hartshorne is a famous text.

Commutative Algebra

  • Atiyah-MacDonald. Many independent solution sheets online if you search for them. Or ask me ;)

Algebraic Number Theory

  • Algebraic Number Theory by Frazer Jarvis, chapters 1-5 (~100 pages), is your primary text. The book is ideal for self-study since it has solutions to the exercises.
  • Introductory Algebraic Number Theory by Alaca and Williams is a bit dry but a good supplementary reference text.
  • Elementary Number Theory by Jones and Jones, is a short text recommended in the preface to the Jarvis book.
  • Algebraic Number Theory by Milne is a set of course notes, clear and concise.

Cryptography

ZK

Reading Maths Books

Finding Texts for Study

Start with a topic you want to learn about, then research texts to study from. Broadly speaking, they are:

  • Easy-reading high school books. Good if you are very short on time.
  • Undergrad textbooks, such as Springer undergraduate books. They are a good intro to a subject; if studying an advanced book, you will want one or two of these as supplementary material for understanding difficult concepts.
  • Graduate-level books are usually the best but require a lot of effort. Concepts and questions will need to be looked up and cross-referenced with other materials. Examples include the yellow Springer books.

Usually you will follow one main text on a topic, with a few other supplementary books as backup. Often you get stuck on a concept in the main text, and the supplementary books will help you make sense of it by looking at things from a different angle. Re-phrasing the same idea using different words can make a big difference in deciphering some theorem or object.

Video Courses

There are many high-quality online courses following important texts. They explain the core theorems, focusing your attention on the key ideas and explaining things in an intuitive, non-formal manner.

Favourites:

Getting Excited, Taking a High Level View

Take a look at the contents. Familiarize yourself with the structure of the book. Make note of topics that you will learn and master. Get excited about the truths that you will unlock. You will come back here periodically to remember why you are studying and where you are going.

Make a lesson plan. Often the first chapter of a new topic is important, but if you're already familiar then maybe you can jump to advanced material.

Be aware that if you struggle too much at the advanced level and make no progress at all, it's a signal to swallow your pride, be humble, and go down to a lower level before moving up again. We take shots, but sometimes we have to take a few steps back. The tortoise beats the hare.

However you must struggle. Don't be a weakling. Fight to rise up. Give it your focus, dedication and attention. Get into the zone, or rausch. You evolve because it is hard.

Reading the Chapter

Now you've chosen your chapter. Do a light first-pass read through it. Focus not on the details but the main theorems and structure of what you're learning. Try to understand from a conceptual level the main ideas and how they will fit together.

It's normal for the end of the chapter to feel increasingly cryptic and unintelligible.

Now return to the beginning of the chapter and begin seriously reading it. Make sure to follow the logic of ideas and understand what the new objects are. You might get stuck on a difficult idea or long proof. Feel free to skip over these and return to them afterwards. Many of the concepts will be new, and you will be awkward in your dealings with them. Do not worry: as you become more familiar with the subject, your understanding will solidify.

As you work through the chapter towards the end, you are learning where all the theorems, definitions and proofs are. You will likely return to these as you try to solve questions.

While you're reading through, you will likely pass back over theorems you tried to understand earlier but skipped. If they still don't make sense, it's fine to put them aside again and return to them later.

In this way we read a chapter in several passes, going back through past material as we go forward or try to solve questions. We might also sideline material at the beginning and decide to look into it more later.

Eventually our familiarity with the chapter is strong, and everything (more or less) makes sense.

Solving Questions

When you are stuck, feel free to ask others in the team, or post questions on math stackexchange if nobody knows.

You will need to research things, searching the web and studying the supplement books.

I tend to slightly prefer books with solutions to questions for self study.

You should always do questions. As many as possible. For core subjects, always attempt to do all or most of the questions, unless there are far too many.

When you are shorter on time or studying a subject on the side, you may choose to pick out a sample of questions with a mix of important looking topics and others which grab your attention or pique your curiosity.

Notes for developers

Making life easy for others

Write useful commit messages.

If your commit is changing a specific module in the code and not touching other parts of the codebase (as should be the case 99% of the time), consider writing a useful commit message that also mentions which module was changed.

For example, a message like:

added foo

is not as clear as

crypto/keypair: Added foo method for Bar struct.

Also keep in mind that commit messages can be longer than a single line, so use this to your advantage to explain your commit and intentions.

ChangeLog

Whenever a major change or sub-project is completed, a summary must be noted in the ChangeLog. Think of this as a bulletin board where the rest of the team is notified of important progress.

As we move through the stages, the current yyyy-mm-dd marker is updated with the current date, and a new section above is created.

cargo fmt pre-commit hook

To ensure every contributor uses the same code style, make sure you run cargo fmt before committing. You can force yourself to do this by creating a git pre-commit hook like the following:

#!/bin/sh
if ! cargo fmt -- --check >/dev/null; then
    echo "There are some code style issues. Run 'cargo fmt' to fix it."
    exit 1
fi

exit 0

Place this script in .git/hooks/pre-commit and make sure it's executable by running chmod +x .git/hooks/pre-commit.

Testing crate features

Our library heavily depends on cargo features. Currently there are more than 650 possible combinations of features to build the library. To ensure everything always compiles and works, we can use a helper for cargo called cargo hack.

The Makefile provided in the repository is already set up to use it, so it's enough to install cargo hack and run make check.
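
For example, assuming a working Rust toolchain with cargo:

% cargo install cargo-hack
% make check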

Etiquette

These are not hard and fast rules, but guidance for team members working together. This allows us to coordinate more effectively.

Abbrev   Meaning             Description
gm       good morning        Reporting in
gn       good night          Logging off for the day
+++      thumbs up           Understood, makes sense
afk*     away from keyboard  Shutting down the computer, so you will lose messages sent to you
b*       back                Returning after leaving
brb      be right back       If you are in a meeting and need to leave for a few mins. For example, maybe you need to grab a book.
one sec  one second          You need to search something on the web, or you are just doing the task (example: opening the file).

* Once we have proper syncing implemented in ircd, these will become less relevant and unnecessary.

Another option is to run your ircd inside a persistent tmux session, so you never miss messages.
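
For example, using standard tmux commands (nothing DarkFi-specific; substitute your own session name and binary path):

% tmux new -s ircd
% ./ircd

Detach with Ctrl-b d, and reattach later with tmux attach -t ircd.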

P2P port ranges

Standard port ranges used by DarkFi.

  • lilith: 25551
  • darkfid-sync: 33032
  • darkfid-consensus: 33033
  • ircd: 25551
  • taud: 23331
  • darkwikid: 24331

Architecture design

This section of the book shows the software architecture of DarkFi and the network implementations.

For this phase of development we organize into teams led by a single surgeon. The role of the team is to give full support to the surgeon and make his work effortless and smooth.

Component    Description                                              Surgeon  Copilot  Assistant  Status
consensus    Algorithm for blockchain consensus                       err      agg      das        Progress
zk / crypto  ZK compiler and crypto algos                             par      nar                 Mature
wasm         WASM smart contract system                               par      nar      xsan       Progress
net          p2p network protocol code                                agg      xsan     nar        Mature
blockchain   consensus + net + db                                     err      das                 Easy
bridge       Develop robust & secure multi-chain bridge architecture  par      xsan                None
tokenomics   Research and define DRK tokenomics                       xeno     err      nar        Starting
util         Various utilities and tooling                            nar      xsan     das        Progress
arch         Architecture, project management and integration        nar      par                 Progress

Priorities:

  1. Consensus
    1. Settle on final algorithm. Currently fixing the reward function.
      1. Todos are leader proof, rewards proof, balance proof and ledger API
    2. Review the contracts
    3. Create the blockchain API
  2. WASM
    1. Update current WASM code
    2. Experiment and draft docs
    3. Begin to implement smart contract subsystem
    4. Migrate dao and otc applications over
  3. util
    1. Simulate event graph subsystem
    2. Create underlying event graph subsystem
    3. Create abstraction layer and APIs
    4. Begin to rewire ircd and taud
    5. Update taud to CRDTs
    6. Q&A on wallet application

Deferred (future):

  1. Migration away from SQLite for WalletDb to file system db based off git porcelain.

Release Cycle

gantt
    title Release Cycle
    dateFormat  DD-MM-YYYY
    axisFormat  %m-%y
    section Phases
    Dcon0            :done, d0, 01-01-2022, 01-04-2022
    Dcon1            :      d1, after d0,   23-12-2022
    Dcon2            :      d2, 23-08-2022, 23-02-2023
    Dcon3            :      d3, after d2,   60d
    Dcon4            :      d4, after d3,   14d
    Dcon5            :      d5, after d4,   7d

Dcon0 - Research (pre-alpha)

Research new techniques, draft up architecture design documents and modify the specs.

During this phase the team looks into new experimental techniques and begins to envision how the product will evolve during the next phase of the cycle.

Dcon1 - New features and changes (alpha)

Add big features and merge branches. Risky changes that are likely to cause bugs or additional work must be done before the end of this phase.

The first 10 weeks overlap with the Dcon3 & Dcon4 phases of the previous release, and many developers will focus on bug fixing in those first weeks.

Developers dedicate a steady 1-2 days/week to the bug tracker, focusing on triaging and newly introduced bugs.

Dcon2 - Improve and stabilize (alpha)

Work to improve, optimize and fix bugs in new and existing features. Only smaller and less risky changes, including small features, should be made in this phase.

If a new feature is too unstable or incomplete, it will be reverted before the end of this phase. Developers spend 2-3 days/week in the bug tracker, triaging and fixing recently introduced or prioritized module bugs.

Dcon3 - Bug fixing only, 2 months (beta)

Focus on bug fixing and getting the release ready.

Development moves to the stabilizing stable branch; in master, Dcon1 for the next release starts. stable is regularly merged into master.

High-priority bugs dictate how much time developers spend in the tracker as opposed to working on the next release's Dcon1 features.

Dcon4 - Prepare release, 2 weeks (release candidate)

The stable branch is frozen to prepare for the release. Only critical and carefully reviewed bug fixes are allowed.

Release candidate and release builds are made. Developers spend a short time, 5 days/week, keeping an eye on the tracker for any unexpected high-priority regression.

Dcon5 - Release, 1 week (release)

The stage where the final builds are packaged for all platforms, with last tweaks to the logs, memes, social media, and video announcements.

The final switch is flicked on dark.fi for the new release to show up on the Download page.

Overview

DarkFi is a layer one proof-of-stake blockchain that supports anonymous applications. It is currently under development. This overview will outline a few key terms that help explain DarkFi.

Cashier: The Cashier is the entry and exit point to the DarkFi network from other blockchains such as Ethereum, Bitcoin and Solana. It is essentially the bridge. Its role is to exchange cryptocurrency assets for anonymous darkened tokens that are pegged to the underlying currency, and vice versa. Currently, the role of the Cashier is trusted and centralized. As a next step, DarkFi plans to implement trust-minimized bridges and eventually fully trustless bridges.

Blockchain: Once new anonymous tokens (e.g. dETH) have been issued, the Cashier posts that data on the blockchain. This data is encrypted and the transaction link is broken.

The DarkFi blockchain currently uses a very simple consensus protocol called Streamlet, and is in the devnet phase: a local testnet run by the DarkFi community. The blockchain has no consensus token yet. DarkFi is working to upgrade to a privacy-enhanced proof-of-stake algorithm called Ouroboros Crypsinous.

Wallet: A wallet is a portal to the DarkFi network. It provides the user with the ability to send and receive anonymous darkened tokens. Each wallet is a full node and stores a copy of the blockchain. All contract execution is done locally on the DarkFi wallet.

P2P Network: The DarkFi ecosystem runs as a network of P2P nodes, where these nodes interact with each other over specific protocols (see node overview). Nodes communicate on a peer-to-peer network, which is also home to tools such as our P2P irc and P2P task manager tau.

ZK smart contracts: Anonymous applications on DarkFi run on proofs that enforce an order of operations. We call these zero-knowledge smart contracts. Anonymous transactions on DarkFi are possible due to the interplay of two contracts, mint and burn (see the sapling payment scheme). Using the same method, we can define advanced applications.

zkas: zkas is the compiler used to compile zk smart contracts written in its assembly-like language. The "assembly" part was chosen because it provides the bare primitives needed for zk proofs, so the language can later be expanded with higher-level syntax. zkas enables developers to compile and inspect contracts.

zkVM: DarkFi's zkVM executes the binaries produced by zkas. The zkVM aims to be a general-purpose zkSNARK virtual machine that empowers developers to quickly prototype and debug zk contracts. It uses a trustless zero-knowledge proof system called Halo 2 with no trusted setup.

Anonymous assets

The DarkFi network allows for the issuance and transfer of anonymous assets with an arbitrary number of parameters. These tokens are anonymous, relying on zero-knowledge proofs to ensure validity without revealing any other information.

New tokens are created and destroyed every time you send an anonymous transaction. To send a transaction on DarkFi, you must first issue a credential that commits to some value you have in your wallet. This is called the Mint phase. Once the credential is spent, it destroys itself; this is called the Burn phase.

Through this process, the link between inputs and outputs is broken.

Mint

During the Mint phase we create a new coin $C$, which is bound to the public key $P$. The coin $C$ is publicly revealed on the blockchain and added to the Merkle tree, which is stored locally on the DarkFi wallet.

We do this using the following process:

Let $v$ be the coin's value. Generate a random $r$, a random blinding factor $b$, and a serial $\rho$.

Create a commitment to these parameters in zero-knowledge:

$C = \mathrm{PoseidonHash}(P, v, \rho, r)$

Check that the value commitment is constructed correctly:

$V = vG + bH$

Reveal $C$ and $V$. Add $C$ to the Merkle tree.

Burn

When we spend the coin, we must ensure that the value of the coin cannot be double spent. We call this the Burn phase. The process relies on a nullifier $N = \mathrm{PoseidonHash}(x, \rho)$, which we create using the secret key $x$ for the public key $P$. Nullifiers are unique per coin and prevent double spending. $R$ is the Merkle root. $v$ is the coin's value.

Generate a random blinding factor $b$.

Check that the secret key corresponds to a public key:

$P = xG$

Check that the public key corresponds to a coin which is in the Merkle tree with root $R$:

$C = \mathrm{PoseidonHash}(P, v, \rho, r)$, with $C$ a leaf of the tree with root $R$

Check that the value commitment is constructed correctly:

$V = vG + bH$

Reveal $V$, $R$, and $N$. Check that $R$ is a valid Merkle root. Check that $N$ does not exist in the nullifier set.

The zero-knowledge proof confirms that $N$ binds to an unrevealed coin of value $v$, and that this coin is in the Merkle tree, without linking $N$ to $C$. Once the nullifier is produced, the coin becomes unspendable.
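
As a minimal Rust sketch of this double-spend rule (illustrative only; in DarkFi the nullifier set is part of the blockchain state, not an in-memory set):

use std::collections::HashSet;

// Each spent coin reveals a unique nullifier N; a transaction that
// reveals an already-seen nullifier is rejected as a double spend.
struct NullifierSet(HashSet<[u8; 32]>);

impl NullifierSet {
    fn try_spend(&mut self, nullifier: [u8; 32]) -> Result<(), &'static str> {
        // insert() returns false if the value was already present.
        if !self.0.insert(nullifier) {
            return Err("rejected: nullifier already revealed (double spend)");
        }
        Ok(())
    }
}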

Adding values

Assets on DarkFi can have any number of values or attributes. This is achieved by creating a credential that hashes any number of values, then checking in zero-knowledge that they are valid.

We check that the sum of the inputs equals the sum of the outputs. This means that:

$\sum V_{in} - \sum V_{out} = \bar{b}H$

and that $\bar{b}H$ is a valid point on the curve, generated by $H$.

This proves that $\sum v_{in} = \sum v_{out}$, where $\bar{b}$ is a secret blinding factor for the amounts.
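
As a hedged worked example with illustrative numbers (not from the protocol spec), take one input of value 10 and two outputs of values 7 and 3:

$V_{in} = 10G + b_1H$

$V_{out,1} + V_{out,2} = (7G + b_2H) + (3G + b_3H) = 10G + (b_2 + b_3)H$

$\sum V_{in} - \sum V_{out} = (b_1 - b_2 - b_3)H = \bar{b}H$

The $G$ components cancel exactly because the values balance, so the difference is a point generated by $H$ alone, revealing nothing about the individual amounts.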

Diagram

Dynamic Proof of Stake

Overview

DarkFi is based on Ouroboros Crypsinous, a privacy-focused proof-of-stake algorithm. Below you may find the technical specifications of DarkFi's blockchain implementation.

Blockchain

The blockchain is a series of epochs: it is a tree of chains, and the chain of maximum length in the tree is the driving chain.

The Crypsinous blockchain is built on top of the Zerocash sapling scheme and the Ouroboros Genesis blockchain. Each party stores its own local view of the blockchain: a sequence of blocks, where each block carries the transactions that are not yet in the canonical chain. A block's st is the block data, and h is the hash of that data. The commitment of a newly created coin binds the coin's parameters together with the clock's current time. The coin's serial number is revealed to spend the coin, and the following epoch's seed is derived from a random oracle evaluation. ptr is the hash of the previous block, and the block carries the NIZK proof of the LEAD statement.

st transactions

The blockchain view is a chain of blocks; each block's st is the Merkle tree structure of the validated transactions received through the network, which include transfer and public transactions.

LEAD statement

The LEAD statement holds for a tuple of public inputs and private witnesses iff:

  • The nonce of the new coin is deterministically derived from the nonce of the old coin; this works as a resistance mechanism to allow the same coin to be eligible for leadership more than once in the same epoch.
  • path is a valid Merkle tree path to the coin's commitment in the tree with the root root.
  • path_sn is a valid path to a leaf at the coin's position in a tree with the given root.
  • Note that this process involves renewing the old coin, whose serial number gets revealed (proof of spending); it becomes an input to a new coin of the same value.

transfer transaction

A transfer transaction uses the pouring mechanism: its inputs are an old coin and a public coin, and its outputs are a new return-change coin and a further recipient coin, such that the total input value equals the total output value. The recipient's coin is communicated by forward-secure encryption to the recipient's public key. The commitments of the new coins are constructed as in the Mint phase.

spend proof

The spend proofs of the old coins are revealed.

NIZK proof

For the circuit's public inputs and witnesses, the proof attests to the following transfer statement, using the Zerocash pouring mechanism:

$path_1$ is a valid path to $cm_{c_1}$ in a tree with the root $root$

$path_2$ is a valid path to $cm_{c_2}$ in a tree with the root $root$, and $sn_{c_2} = PRF^{zdrv}_{root_{sk^{COIN}_{c_1}}}(\rho_{c_1})$

Toward better decentralization in Ouroboros

The randomization of the leader selection at each slot hinges on three random values, derived from the epoch nonce $\eta$ and the root of the secret keys. The root of the secret keys for each stakeholder can be sampled and derived beforehand, but $\eta$ is a response of the global random oracle, so the whole security of the leader selection hinges on $\eta$.

Solution

To break this centralization, a decentralized emulation of the random oracle functionality for the calculation of $\eta$ is needed. Note that the first transaction in the block is the proof transaction.

Epoch

An epoch is a vector of blocks. Some of the blocks might be empty if there is no winning leader.

Leader selection

At the onset of each slot, each stakeholder needs to verify whether it is the weighted-random leader for this slot.

Check whether the random VRF output $y$ is less than some threshold $T$:

$y < T$

This statement might hold true for zero or more stakeholders, so we might end up with multiple leaders for a slot, and at other times no leader. Also note that no node knows the leader's identity or how many leaders there are for the slot until it receives a signed block with a proof claiming to be a leader.

$\eta$ is a random nonce generated from the blockchain, and $sid$ is the block id.

Note that $\phi_f(1) = f$: the active slot coefficient $f$ is the probability that a party holding all the stake will be selected to be a leader. Stakeholder $i$ is selected as leader for slot $j$ with probability $\phi_f(\alpha_i) = 1 - (1-f)^{\alpha_i}$, where $\alpha_i$ is its relative stake.
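
For instance, with illustrative numbers (not protocol parameters): if $f = 0.05$ and $\alpha_i = 0.5$, then $\phi_f(0.5) = 1 - 0.95^{0.5} \approx 0.0253$, i.e. roughly a 2.5% chance of winning any given slot.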

What follows is a family of leader selection functions that depend on absolute stake aggregation.

Linear family functions

The previous leader selection function has the unique property of independent aggregation of stakes: the probability of winning leadership is independent of whether the stakeholder acts as a single pool of stakes or splits them across competing coins. "One minus the probability" of winning leadership with aggregated stakes is $1 - \phi_f(\sum_i \alpha_i) = (1-f)^{\sum_i \alpha_i} = \prod_i (1-f)^{\alpha_i} = \prod_i (1 - \phi_f(\alpha_i))$, the joint "one minus probability" of each of the stakes winning on its own; thus aggregation is independent.

A non-exponential, linear leader selection function can also be used.

Dependent aggregation

Linear leader selection has the dependent aggregation property: it is favorable to compete in pools with the sum of the stakes rather than with the same stake distributed over competing coins.

Assume the stake is divided into smaller stakes; under a linear target function the probability of winning with the single summed stake exceeds the joint probability of the distributed stakes, so competing with a single coin holding the sum of the stakes held by the stakeholder is favorable.

Scalar linear aggregation dependent leader selection

A target function $T$ with scalar coefficients can be formalized over the group order; for example, for a group order of $l = 24$ bits and the corresponding maximum stake value, the lead statement can be instantiated with concrete scalar coefficients.

Competing max value coins

For a stakeholder with the maximum absolute stake, it is advantageous to distribute the stake over competing max-value coins.

Inverse functions

Inverse leader selection functions don't require a maximum stake and are most suitable for absolute stake. They have the disadvantage of inflating the win rate at an increasing pace as time goes on, but the function can be made to depend on the inverse of the slot index to control the increasing frequency of winning leadership.

Leader selection without maximum stake upper limit

The inverse leader selection without a maximum stake value uses a coefficient inversely proportional to the probability of winning leadership; let it be called the leadership coefficient.

Decaying linear leader selection

As time goes on and stakes increase, the combined stakes of all stakeholders increase the probability of winning leadership in later slots, leading to more leaders at a single slot. To maintain, or more generally to control, this frequency of leaders per slot, $c$ (the leadership coefficient) needs to be a function of the slot index, decaying with the epoch size (the number of slots in an epoch).

Pairing leader selection independent aggregation function

The only family of functions that is isomorphic between addition and multiplication (and thus has the independent aggregation property) is the exponential function. Since it is impossible to implement directly in PLONK, a re-formalization of the lead statement using a pairing that is isomorphic between addition and multiplication is an option.

Assume $f$ is an isomorphic function between addition and multiplication, i.e. $f(a+b) = f(a)f(b)$; the only family of functions satisfying this is the exponential family $f(x) = c^x$.

There is no solution for the lead statement parameters and constants defined over a group of integers.

Assume there is a solution for the lead statement parameters and constants defined over a group of integers. Following from the previous proof, any family of functions having the independent aggregation property is exponential; over the integers the smallest base satisfying this is 2, but an exponential target with base at least 2 exceeds the group bound for stakes up to the maximum stake value, a contradiction.

Target T n-term approximation

Since the exponential target cannot be computed directly in the proof system, it can be approximated by its first n terms.

  • $s$ is the stake, and $S$ is the total stake.

Leaky non-resettable beacon

Built on top of a globally synchronized clock, the beacon leaks the nonce of the next epoch ahead of time (thus called leaky). It is non-resettable in the sense that the random nonce is deterministic at slot s, while assuring security against an adversary controlling some stakeholders.

For an epoch $j$, the nonce $\eta_j$ is calculated by a hash function $H$ as:

$\eta_j = H(\eta_{j-1} \,\|\, j \,\|\, v)$

where $v$ is the concatenation of the values in all blocks from the beginning of the epoch up to the cutoff slot; note that $k$ is a persistence security parameter and $R$ is the epoch length in terms of slots.

Appendix

This section gives further details about the structures that will be used by the protocol.

Blockchain

Field   Type        Description
blocks  Vec<Block>  Series of blocks constituting the Blockchain

Header

Field      Type        Description
version    u8          Version
previous   blake3Hash  Previous block hash
epoch      u64         Epoch
slot       u64         Slot UID
timestamp  Timestamp   Block creation timestamp
root       MerkleRoot  Root of the transaction hashes merkle tree

Block

Field      Type             Description
magic      u8               Magic bytes
header     blake3Hash       Header hash
txs        Vec<blake3Hash>  Transaction hashes
lead_info  LeadInfo         Block leader information

LeadInfo

Field          Type               Description
signature      Signature          Block owner signature
public_inputs  Vec<pallas::Base>  NIZK proof public inputs
serial_number  pallas::Base       Competing coin's nullifier
eta            [u8; 32]           Randomness from the previous epoch
proof          Vec<u8>            NIZK proof that the stakeholder is the block owner
offset         u64                Slot offset the block producer used
leaders        u64                Block producer leaders count

Consensus

This section of the book describes how nodes participating in the DarkFi blockchain achieve consensus.

Glossary

Name                    Description
Consensus               Algorithm for reaching blockchain consensus between participating nodes
Node                    darkfid daemon participating in the network
Slot                    Specified timeframe for block production, measured in seconds (default: 20)
Epoch                   Specified timeframe for blockchain events, measured in slots (default: 10)
Leader                  Block producer
Unproposed Transaction  Transaction that exists in the memory pool but has not yet been included in a block
Block proposal          Block that has not yet been appended onto the canonical blockchain
P2P network             Peer-to-peer network on which nodes communicate with each other
Finalization            State achieved when a block and its contents are appended to the canonical blockchain
Fork                    Chain of block proposals that begins with the last block of the canonical blockchain

Node main loop

As described in the previous chapter, DarkFi is based on Ouroboros Crypsinous. Therefore, block production involves the following steps:

At the start of every slot, each node runs a leader selection algorithm to determine if it is the slot's leader. If successful, it can produce a block containing unproposed transactions. This block is appended to the largest known fork and shared with the rest of the nodes on the P2P network as a block proposal.

Before the end of every slot, each node triggers a finalization check to verify which block proposals can be finalized onto the canonical blockchain. This is also known as the finalization sync period.

Pseudocode:

loop {
    // Wait for the beginning of the next slot
    wait_for_next_slot_start()

    // On epoch change, create the coins that will compete
    // for leadership during this epoch
    if epoch_changed() {
        create_competing_coins()
    }

    // Run the leader selection algorithm for this slot
    if is_slot_leader() {
        block = propose_block()
        p2p.broadcast_block(block)
    }

    wait_for_slot_end()

    // Finalization sync period
    chain_finalization()
}

Listening for blocks

Each node listens for new block proposals concurrently with the main loop. Upon receiving a block proposal, nodes try to extend a fork they hold in memory with the proposed block. This process is described in the next section.

Fork extension

Since there can be more than one slot leader, each node holds a set of known forks in memory. When a node becomes a leader, it extends the longest fork it holds.

Upon receiving a block, one of the following cases may occur:

Description                                Handling
Block extends a known fork at its end      Append block to fork
Block extends a known fork not at its end  Create a new fork up to the extended block and append the new block
Block extends canonical blockchain         Create a new fork containing the new block
Block doesn't extend any known chain       Ignore block
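
Expressed as a minimal Rust sketch, assuming simplified, hypothetical Block and Fork types (the real handling lives in darkfid's consensus code and differs in detail):

// Hypothetical, simplified types for illustration only.
#[derive(Clone)]
struct Block {
    hash: u64,
    previous: u64,
}

struct Fork {
    blocks: Vec<Block>,
}

enum Action {
    ExtendFork(usize),    // Case 1: append to forks[i]
    NewFork(Vec<Block>),  // Cases 2 and 3: add a new fork
    Ignore,               // Case 4
}

fn handle_proposal(forks: &[Fork], canonical_tip: u64, proposal: &Block) -> Action {
    // Case 1: the proposal extends a known fork at its end.
    for (i, fork) in forks.iter().enumerate() {
        if fork.blocks.last().map(|b| b.hash) == Some(proposal.previous) {
            return Action::ExtendFork(i);
        }
    }
    // Case 2: the proposal extends a known fork before its end:
    // copy the fork up to the extended block, then append the proposal.
    for fork in forks {
        if let Some(pos) = fork.blocks.iter().position(|b| b.hash == proposal.previous) {
            let mut blocks = fork.blocks[..=pos].to_vec();
            blocks.push(proposal.clone());
            return Action::NewFork(blocks);
        }
    }
    // Case 3: the proposal extends the canonical blockchain directly.
    if proposal.previous == canonical_tip {
        return Action::NewFork(vec![proposal.clone()]);
    }
    // Case 4: the proposal doesn't extend any chain we know about.
    Action::Ignore
}

Cases 2 and 3 both produce a new fork; they differ only in whether the copied prefix comes from an existing fork or starts directly at the canonical tip.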

Visual Examples

Symbol         Description
[C]            Canonical (finalized) blockchain block
[C]--...--[C]  Sequence of canonical blocks
[Ln]           Proposal produced by Leader n
Fn             Fork name to identify forks in examples
+--            Appending a block to a fork
/--            Dropped fork

Starting state:

               |--[L0] <-- F0
[C]--...--[C]--|
               |--[L1] <-- F1

Case 1

Extending F0 fork with a new block proposal:

               |--[L0]+--[L2] <-- F0
[C]--...--[C]--|
               |--[L1]        <-- F1

Case 2

Extending F0 fork at [L0] slot with a new block proposal, creating a new fork chain:

               |--[L0]--[L2]   <-- F0
[C]--...--[C]--|
               |--[L1]         <-- F1
               |
               |+--[L0]+--[L3] <-- F2
Case 3

Extending the canonical blockchain with a new block proposal:

               |--[L0]--[L2] <-- F0
[C]--...--[C]--|
               |--[L1]       <-- F1
               |
               |--[L0]--[L3] <-- F2
               |
               |+--[L4]      <-- F3

Finalization

When the finalization sync period kicks in, each node looks up the longest fork chain it holds. This fork must be at least 3 blocks long, and there must be no other fork chain of the same length. If such a fork chain exists, nodes finalize all its block proposals up to the last one by appending them to the canonical blockchain.

Once finalized, all other fork chains are removed from the memory pool. Practically, this means that no finalization can occur while there are competing fork chains of the same length. In such a case, finalization can only occur once we reach a slot with a single leader.
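
A hedged sketch of this check, reusing the hypothetical Fork type from the fork-extension sketch above (illustrative, not the actual darkfid logic):

// Returns the index of the fork to finalize, if the rules allow it.
fn finalizable_fork(forks: &[Fork]) -> Option<usize> {
    let max_len = forks.iter().map(|f| f.blocks.len()).max()?;
    // The longest fork must be at least 3 blocks long.
    if max_len < 3 {
        return None;
    }
    // There must be no other fork chain with the same length.
    let mut longest = forks.iter().enumerate().filter(|(_, f)| f.blocks.len() == max_len);
    let (index, _) = longest.next()?;
    if longest.next().is_some() {
        return None; // competing forks: wait for a slot with a single leader
    }
    // Finalize all proposals of forks[index] up to the last one.
    Some(index)
}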

We continue Case 3 from the previous section to visualize this logic. On slot 5, a node observes 2 proposals. One extends the F0 fork, and the other extends the F2 fork:

               |--[L0]--[L2]+--[L5a] <-- F0
[C]--...--[C]--|
               |--[L1]               <-- F1
               |
               |--[L0]--[L3]+--[L5b] <-- F2
               |
               |--[L4]               <-- F3

Since we have two competing fork chains, finalization cannot occur.

On the next slot, a node observes only one proposal, which extends the F2 fork:

               |--[L0]--[L2]--[L5a]        <-- F0
[C]--...--[C]--|
               |--[L1]                     <-- F1
               |
               |--[L0]--[L3]--[L5b]+--[L6] <-- F2
               |
               |--[L4]                     <-- F3

When the finalization sync period starts, the node finalizes fork F2 and all other forks get dropped:

               |/--[L0]--[L2]--[L5a]      <-- F0
[C]--...--[C]--|
               |/--[L1]                   <-- F1
               |
               |--[L0]--[L3]--[L5b]--[L6] <-- F2
               |
               |/--[L4]                   <-- F3

This results in the following state:

[C]--...--[C]--|--[L6]

The canonical blockchain contains blocks L0, L3 and L5b from fork F2.

DarkFi Node Architecture (DNA)

The DarkFi ecosystem runs as a network of P2P nodes, where these nodes interact with each other over specific programs (or layers). In this section, we'll explain how the layers fit together and how, when combined, they create the functioning network that is DarkFi.

The layers are organized as a bottom-up pyramid, much like the DarkFi logo.

We will start with the top-level daemon - validatord - which serves as the consensus and data storage layer. Then we will explain darkfid and its communication with the layer above (validatord) and the layer below (drk).

An abstract view of the network looks like the following:

[drk] <--> [darkfid] <--> [validatord] <-+
                                         |
[drk] <--> [darkfid] <--> [validatord] <-+

validatord

validatord is the DarkFi consensus and data storage layer. Everyone who runs a validator participates in the network as a data archive: they store incoming transactions and relay them to other validators over the P2P network and protocol. Additionally, storing this data allows others to replicate it and participate in the same way.

Provided there is a locked stake on a running validator, the node can also participate in the Proof-of-Stake consensus, gaining the ability to vote on incoming transactions rather than just relaying (and validating) them.

In case the node is not participating in the consensus, it should still relay incoming transactions to other (ideally consensus-participating) validators in the network.

Inner workings

In its database, validatord stores transactions and blocks that have reached consensus. This is commonly known as a "blockchain". The blockchain is a shared state that is replicated between all validators in the network.

Additionally, validators keep a pool of incoming transactions and proposed blocks, which get validated and voted on by the consensus-participating validators.

The lifetime of an incoming transaction (and block) is as follows (a rough sketch in code follows the list):

  1. Wait for a transaction
  2. Validate incoming transaction (and go back to 1. if invalid)
  3. Broadcast transaction to other validators in the network
  4. Other validators validate transaction (and go back to 1. if invalid)
  5. Leader validates the state transition and proposes a block
  6. Consensus-participating nodes validate the state transition and vote on the proposed block if the state transition is valid.
  7. If the block is confirmed, it is appended to the blockchain and is replicated between all validators in the network.
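
As a rough sketch with hypothetical stub types, steps 1-4 of this lifetime could look like the following; steps 5-7 happen at the consensus layer:

// Sketch of steps 1-4 above. `Tx`, `validate` and `broadcast` are
// hypothetical stand-ins for the real types and P2P machinery.
struct Tx;

fn validate(_tx: &Tx) -> bool {
    // Signature and state-transition checks would go here.
    true
}

fn broadcast(_tx: &Tx) {
    // Relay the transaction over the P2P network.
}

fn on_incoming_tx(tx: Tx, pool: &mut Vec<Tx>) {
    // 2. Validate the incoming transaction; if it's invalid, drop it
    //    and go back to waiting (step 1).
    if !validate(&tx) {
        return;
    }

    // 3. Relay the valid transaction to other validators, which run
    //    the same validation on their side (step 4).
    broadcast(&tx);

    // The transaction now waits in the pool until a leader proposes
    // a block containing it (steps 5-7).
    pool.push(tx);
}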

darkfid

TODO: Initial sync and retrieving wallet state?

darkfid is the client layer of DarkFi used for wallet management and transaction broadcasting. The wallet keeps a history of balances, coins, nullifiers, and merkle roots that are necessary in order to create new transactions.

By design, darkfid is a light client, since validatord stores all the blockchain data, and darkfid can simply query for anything it is interested in. This allows us to avoid data duplication and simply utilize our modular architecture. This also means that darkfid can easily be replaced with more specific tooling, if need be.

Inner workings

Using the P2P network and protocol, darkfid can subscribe to validatord in order to receive new nullifiers and merkle roots whenever a new block is confirmed. This allows darkfid to update its local state and enables it to create new valid transactions.

darkfid exposes a JSON-RPC endpoint for clients to interact with it. This allows a number of things, such as: listing balances, creating and submitting transactions, key management, and more.

When creating a new transaction, darkfid uses the locally synced state to create new coins and combine them in a transaction. This transaction is then submitted to the validator layer above, where it will be validated and voted on in order to be included in a block.

drk

drk is a client tool that interacts with darkfid in a user-friendly way and provides a command-line interface to the DarkFi network and its functionality.

The interaction with darkfid happens over the JSON-RPC protocol, using the endpoint exposed by darkfid.

Anonymous Smart Contracts

Every full node is a verifier.

The prover is the person executing the smart contract function on their secret witness data. Provers are also verifiers in our model.

Let's take a pseudocode smart contract:

contract Dao {
    # 1. the DAO's global state
    dao_bullas = DaoBulla[]
    proposal_bullas = ProposalBulla[]
    proposal_nulls = ProposalNull[]

    # 2. a public smart contract function
    #    there can be many of these
    fn mint(...) {
        ...
    }

    ...
}

Important Invariants

  1. The state of a contract (the contract member values) is globally readable but only writable by that contract's functions.
  2. Transactions are atomic. If a subsequent contract function call fails, then the earlier ones are also invalid. The entire tx will be rolled back.
  3. foo_contract::bar_func::validate::state_transition() is able to access the entire transaction to perform validation on its structure. It might need to enforce requirements on the calldata of other function calls within the same tx. See DAO::exec().

Global Smart Contract State

Internally we represent this smart contract like this:

mod dao_contract {
    // Corresponds to 1. above, the global state
    struct State {
        dao_bullas: Vec<DaoBulla>,
        proposal_bullas: Vec<ProposalBulla>,
        proposal_nulls: Vec<ProposalNull>
    }

    // Corresponds to 2. mint()
    mod mint {
        // Prover specific
        struct Builder {
            ...
            // secret witness values for prover
            ...
        }

        impl Builder {
            fn new(...) -> Self {
                ...
            }

            fn build() -> FuncCall {
                ...
            }
        }

        // Verifier code
        struct CallData {
            ...
            // contains the function call data
            ...
        }
    }
}

There is a pipeline where the prover runs Builder::build() to create the FuncCall object that is then broadcast to the verifiers through the p2p network.

The CallData is usually the set of public values exported from a ZK proof. Essentially, it is the data used by the verifier to check the function call for DAO::mint().
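
For illustration, the prover side of this pipeline might be sketched as follows. The Builder arguments and the broadcast call are stand-ins rather than the real API:

// Sketch: the prover builds the FuncCall, wraps it in a Transaction
// and broadcasts it to the verifiers. Arguments are elided and
// `p2p.broadcast_tx` is a hypothetical method name.
let builder = dao_contract::mint::Builder::new(/* secret witness values */);
let func_call = builder.build();

let tx = Transaction {
    func_calls: vec![func_call],
    // One list of signatures per function call.
    signatures: vec![vec![/* ... */]],
};

p2p.broadcast_tx(&tx);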

Atomic Transactions

Transactions represent several function call invocations that are atomic. If any function call fails, the entire tx is rejected. Additionally, some smart contracts might impose extra conditions on the transaction's structure or on other function calls (such as their call data).

pub struct Transaction {
    pub func_calls: Vec<FuncCall>,
    pub signatures: Vec<Vec<Signature>>,
    // pub proofs: Vec<Proof>,
}

Function calls represent mutations of the current active state to a new state.

pub struct FuncCall {
    pub contract_id: ContractId,
    pub func_id: FuncId,
    pub call_data: Box<dyn CallDataBase + Send + Sync>,
    pub proofs: Vec<Proof>,
}

The contract_id corresponds to the top level module for the contract which includes the global State.

The func_id of a function call corresponds to predefined objects in the submodules:

  • Builder creates the anonymized CallData. Run by the prover.
  • CallData is the parameters used by the anonymized function call invocation. Verifiers have this.
  • state_transition() that runs the function call on the current state using the CallData.
  • apply() commits the update to the current state taking it to the next state.

For example, a contract_id could represent DAO or Money, while a func_id could represent DAO::mint() or Money::transfer().

Each function call invocation is run using its own state_transition() function.

mod dao_contract {
    ...

    // DAO::mint() in the smart contract pseudocode
    mod mint {
        ...

        fn state_transition(states: &StateRegistry, func_call_index: usize, parent_tx: &Transaction) -> Result<Update> {
            // we could also change the state_transition() function signature
            // so we pass the func_call itself in
            let func_call = &parent_tx.func_calls[func_call_index];
            let call_data = &func_call.call_data;
            // It's useful to have the func_call_index within parent_tx because
            // we might want to enforce that it appears at a certain index exactly.
            // So we know the tx is well formed.

            // we can elide this with macro magic
            // dao_contract::mint::validate::CallData
            assert_eq!(call_data.as_any().type_id(), TypeId::of::<CallData>());
            let call_data = call_data.as_any().downcast_ref::<CallData>();

            ...
        }
    }
}

The state_transition() function has access to the entire atomic transaction to enforce correctness. For example, chaining of function calls is used by the DAO::exec() smart contract function to move money out of the treasury using Money::transfer() within the same transaction.

Additionally StateRegistry gives smart contracts access to the global states of all smart contracts on the network, which is needed for some contracts.

Note that during this step, the state is not modified. Modification happens after state_transition() has been run for all function call invocations within the transaction. Assuming they all pass successfully, the updates are applied at the end. This ensures the atomicity property of transactions.

mod dao_contract {
    ...

    // DAO::mint() in the smart contract pseudocode
    mod mint {
        ...

        // StateRegistry is mutable
        fn apply(states: &mut StateRegistry, update: Update) {
            ...
        }
    }
}

The transaction verification pipeline roughly looks like this (a code sketch follows the list):

  1. Loop through all function call invocations within the transaction:
    1. Lookup their respective state_transition() function based on their contract_id and func_id. The contract_id and func_id correspond to the contract and specific function, such as DAO::mint().
    2. Call the state_transition() function and store the update. Halt if this function fails.
  2. Loop through all updates
    1. Lookup the specific apply() function based on the contract_id and func_id.
    2. Call apply(update) to finalize the change.
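
In code, the pipeline might be sketched like this, where the two lookup helpers are hypothetical stand-ins for the real dispatch machinery:

// Sketch of the two-phase verification pipeline. The lookup_*
// helpers are hypothetical; they dispatch on (contract_id, func_id).
fn verify_transaction(states: &mut StateRegistry, tx: &Transaction) -> Result<()> {
    let mut updates = Vec::new();

    // Phase 1: run state_transition() for every function call.
    // The state is only read here, never written.
    for (i, func_call) in tx.func_calls.iter().enumerate() {
        let state_transition = lookup_state_transition(func_call.contract_id, func_call.func_id)?;
        // Halt immediately if any call fails.
        let update = state_transition(states, i, tx)?;
        updates.push((func_call, update));
    }

    // Phase 2: all calls passed, so commit every update. Splitting
    // the two phases is what makes the transaction atomic.
    for (func_call, update) in updates {
        let apply = lookup_apply(func_call.contract_id, func_call.func_id)?;
        apply(states, update);
    }

    Ok(())
}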

ZK Proofs and Signatures

Let's review the format of transactions again.

pub struct Transaction {
    pub func_calls: Vec<FuncCall>,
    pub signatures: Vec<Vec<Signature>>,
    // pub proofs: Vec<Proof>,
}

And corresponding function calls.

pub struct FuncCall {
    pub contract_id: ContractId,
    pub func_id: FuncId,
    pub call_data: Box<dyn CallDataBase + Send + Sync>,
    pub proofs: Vec<Proof>,
}

As we can see, the ZK proofs and signatures are separate from the actual call_data interpreted by state_transition(). They are both automatically verified by the VM.

However, for verification to work, the ZK proofs also need their corresponding public values, and the signatures need the public keys. We provide these through the CallDataBase trait by exporting these methods:

pub trait CallDataBase {
    // Public values for verifying the proofs
    // Needed so we can convert internal types so they can be used in Proof::verify()
    fn zk_public_values(&self) -> Vec<(String, Vec<DrkCircuitField>)>;

    // For upcasting to CallData itself so it can be read in state_transition()
    fn as_any(&self) -> &dyn Any;

    // Public keys we will use to verify transaction signatures.
    fn signature_public_keys(&self) -> Vec<PublicKey>;

    fn encode_bytes(
        &self,
        writer: &mut dyn std::io::Write,
    ) -> std::result::Result<usize, std::io::Error>;
}

These methods export the required values needed for the ZK proofs and signature verification from the actual call data itself.

For signature verification, the data we are verifying is simply the entire transaction minus the actual signatures. That's why the signatures are a separate top-level field in the transaction.
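
As an illustration, a contract's CallData might implement the trait roughly like below. The field names and the inner encode() call are invented for this sketch:

// Sketch: a CallDataBase implementation for a hypothetical
// dao_contract::mint::CallData. Field names are invented.
impl CallDataBase for CallData {
    fn zk_public_values(&self) -> Vec<(String, Vec<DrkCircuitField>)> {
        // One entry per proof in FuncCall::proofs, in the same order.
        vec![("dao-mint".to_string(), self.revealed.clone())]
    }

    fn as_any(&self) -> &dyn Any {
        self
    }

    fn signature_public_keys(&self) -> Vec<PublicKey> {
        // These keys must have signed the tx minus its signatures field.
        vec![self.signature_public]
    }

    fn encode_bytes(
        &self,
        writer: &mut dyn std::io::Write,
    ) -> std::result::Result<usize, std::io::Error> {
        // Serialize the call data; `encode` stands in for the
        // project's serialization trait.
        self.encode(writer)
    }
}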

Parallelisation Techniques

Since verification is done through state_transition() which returns an update that is then committed to the state using apply(), we can verify all transactions in a block in parallel.

To enable calling another transaction within the same block (such as for flash loans), we can add a special depends field within the tx that makes a tx wait on another tx before being allowed to verify. This causes a small deanonymization, but brings a massive scalability benefit to the entire system.

ZK proof verification should be done automatically by the system. Any proof that fails marks the entire tx as invalid, and the tx is discarded. This should also be parallelized.
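
For instance, phase 1 could be parallelized per transaction with a data-parallelism library such as rayon. This is a sketch under the assumption of helper functions compute_updates() and apply_updates(), not the actual code:

// Sketch: verify all txs in a block in parallel, then apply the
// updates sequentially. `compute_updates` runs state_transition()
// (and ZK proof verification) for one tx without touching the state.
use rayon::prelude::*;

fn verify_block(states: &mut StateRegistry, txs: &[Transaction]) -> Result<()> {
    let states_ref = &*states; // read-only view, safe to share across threads
    let updates: Vec<_> = txs
        .par_iter()
        .map(|tx| compute_updates(states_ref, tx))
        .collect::<Result<Vec<_>>>()?; // any invalid tx fails here and is discarded

    // Commit sequentially once everything has been verified.
    for update in updates {
        apply_updates(states, update);
    }

    Ok(())
}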

Tooling

DarkFi Fullnode Daemon

darkfid is the DarkFi fullnode. It manages the blockchain, validates transactions, and remains connected to the p2p network.

Clients can connect over localhost RPC or secure socket and perform these functions:

  • Get the node status and modify settings in real time.
  • Query the blockchain.
  • Broadcast txs to the p2p network.
  • Get tx status, query the mempool and interact with components.

darkfid has no concept of keys or wallet functionality; key management is left entirely to clients.

Low Level Client

Clients manage keys and objects. They make queries to darkfid, and receive notes encrypted to their public keys.

Their design is usually specific to their application but modular.

They also expose a high-level, simple-to-use API corresponding exactly to their commands, so that product teams can easily build an application, using the command-line tool as an interactive debugging application and point of reference.

NOTE: should the API use byte arrays or hex strings?

The API should be well documented, with all arguments explained. Likewise for the commands' help text.

Command cheatsheets and example sessions are strongly encouraged.

Clients

This section gives information on DarkFi's clients, such as darkfid and cashierd. Currently, it offers documentation on the clients' JSON-RPC APIs.

darkfid JSON-RPC API

blockchain methods

blockchain.get_slot

Queries the blockchain database for a block in the given slot. Returns a readable block upon success.
[src]

--> {"jsonrpc": "2.0", "method": "blockchain.get_slot", "params": [0], "id": 1}
<-- {"jsonrpc": "2.0", "result": {...}, "id": 1}

blockchain.merkle_roots

Queries the blockchain database for all available merkle roots.
[src]

--> {"jsonrpc": "2.0", "method": "blockchain.merkle_roots", "params": [], "id": 1}
<-- {"jsonrpc": "2.0", "result": [..., ..., ...], "id": 1}

blockchain.subscribe_blocks

Initializes a subscription to new incoming blocks. Once a subscription is established, darkfid will send JSON-RPC notifications of new incoming blocks to the subscriber.
[src]

--> {"jsonrpc": "2.0", "method": "blockchain.subscribe_blocks", "params": [], "id": 1}
<-- {"jsonrpc": "2.0", "method": "blockchain.subscribe_blocks", "params": [`blockinfo`]}

blockchain.lookup_zkas

Performs a lookup of zkas bincodes for a given contract ID and returns all of them, including their namespace.
[src]

--> {"jsonrpc": "2.0", "method": "blockchain.lookup_zkas", "params": ["6Ef42L1KLZXBoxBuCDto7coi9DA2D2SRtegNqNU4sd74"], "id": 1}
<-- {"jsonrpc": "2.0", "result": [["Foo", [...]], ["Bar", [...]]], "id": 1}

tx methods

tx.simulate

Simulates a network state transition with the given transaction. Returns true if the transaction is valid; otherwise, returns a corresponding error.
[src]

--> {"jsonrpc": "2.0", "method": "tx.simulate", "params": ["base58encodedTX"], "id": 1}
<-- {"jsonrpc": "2.0", "result": true, "id": 1}

tx.broadcast

Broadcasts a given transaction to the P2P network. The function will first simulate the state transition to check that the transaction is actually valid, and will return an error if it is not. Otherwise, a transaction ID will be returned.
[src]

--> {"jsonrpc": "2.0", "method": "tx.broadcast", "params": ["base58encodedTX"], "id": 1}
<-- {"jsonrpc": "2.0", "result": "txID...", "id": 1}

wallet methods

wallet.query_row_single

Attempts to query for a single row in a given table. The parameters given contain paired metadata so we know how to decode the SQL data. An example of params is as such:

params[0] -> "sql query"
params[1] -> column_type
params[2] -> "column_name"
...
params[n-1] -> column_type
params[n] -> "column_name"

This function will fetch the first row it finds, if any. The column_type field is a type available in the WalletDb API as an enum called QueryType. If a row is not found, the returned result will be a JSON-RPC error. NOTE: This is obviously vulnerable to SQL injection. Open to interesting solutions.
[src]

--> {"jsonrpc": "2.0", "method": "wallet.query_row_single", "params": [...], "id": 1}
<-- {"jsonrpc": "2.0", "result": ["va", "lu", "es", ...], "id": 1}

wallet.query_row_multi

Attempts to query for all available rows in a given table. The parameters given contain paired metadata so we know how to decode the SQL data. They're the same as above in wallet.query_row_single. If there are any values found, they will be returned in a paired array. If not, an empty array will be returned.
[src]

--> {"jsonrpc": "2.0", "method": "wallet.query_row_multi", "params": [...], "id": 1}
<-- {"jsonrpc": "2.0", "result": [["va", "lu"], ["es", "es"], ...], "id": 1}

wallet.exec_sql

Executes an arbitrary SQL query on the wallet, and returns true on success. params[1..] can optionally be provided in pairs like in wallet.query_row_single.
[src]

--> {"jsonrpc": "2.0", "method": "wallet.exec_sql", "params": ["CREATE TABLE ..."], "id": 1}
<-- {"jsonrpc": "2.0", "result": true, "id": 1}

misc methods

ping

Returns a pong to the ping request.
[src]

--> {"jsonrpc": "2.0", "method": "ping", "params": [], "id": 1}
<-- {"jsonrpc": "2.0", "result": "pong", "id": 1}

clock

Returns the current system clock in Timestamp format.
[src]

--> {"jsonrpc": "2.0", "method": "clock", "params": [], "id": 1}
<-- {"jsonrpc": "2.0", "result": {...}, "id": 1}

cashierd JSON-RPC API

deposit

Executes a deposit request for the given network and token_id. Returns the address to which the deposit should be transferred.
[src]

--> {"jsonrpc": "2.0", "method": "deposit", "params": ["network", "token", "publickey"], "id": 1}
<-- {"jsonrpc": "2.0", "result": "Ht5G1RhkcKnpLVLMhqJc5aqZ4wYUEbxbtZwGCVbgU7DL", "id": 1}

withdraw

Executes a withdraw request for the given network, token_id, publickey and amount. The publickey should correspond to the network. Returns the transaction ID of the processed withdrawal.
[src]

--> {"jsonrpc": "2.0", "method": "withdraw", "params": ["network", "token", "publickey", "amount"], "id": 1}
<-- {"jsonrpc": "2.0", "result": "txID", "id": 1}

features

Returns supported cashier features, like network, listening ports, etc.
[src]

--> {"jsonrpc": "2.0", "method": "features", "params": [], "id": 1}
<-- {"jsonrpc": "2.0", "result": {"network": ["btc", "sol"]}, "id": 1}

faucetd JSON-RPC API

airdrop

Processes an airdrop request and airdrops the requested token and amount to an address. Returns the transaction ID upon success. Params:

  • params[0]: base58 encoded address of the recipient
  • params[1]: amount to airdrop in the form of an f64
  • params[2]: base58 encoded token ID to airdrop
[src]

--> {"jsonrpc": "2.0", "method": "airdrop", "params": ["1DarkFi...", 1.42, "1F00b4r..."], "id": 1}
<-- {"jsonrpc": "2.0", "result": "txID", "id": 1}

zkas

zkas is a compiler for the Halo2 zkVM language used in DarkFi.

The current implementation found in the DarkFi repository inside src/zkas is the reference compiler and language implementation. It is a toolchain consisting of a lexer, parser, static and semantic analyzers, and a binary code compiler.

The main.rs file shows how this toolchain is put together to produce binary code from source code.

Architecture

The main part of the compilation happens inside the parser. New opcodes can be added by extending opcode.rs.

    // The lexer goes over the input file and separates its content into
    // tokens that get fed into a parser.
    let lexer = Lexer::new(filename, source.chars());
    let tokens = lexer.lex();

    // The parser goes over the tokens provided by the lexer and builds
    // the initial AST, not caring much about the semantics, just enforcing
    // syntax and general structure.
    let parser = Parser::new(filename, source.chars(), tokens);
    let (namespace, constants, witnesses, statements) = parser.parse();

    // The analyzer goes through the initial AST provided by the parser and
    // converts return and variable types to their correct forms, and also
    // checks that the semantics of the ZK script are correct.
    let mut analyzer = Analyzer::new(filename, source.chars(), constants, witnesses, statements);
    analyzer.analyze_types();

    if args.interactive {
        analyzer.analyze_semantic();
    }

    if args.evaluate {
        println!("{:#?}", analyzer.constants);
        println!("{:#?}", analyzer.witnesses);
        println!("{:#?}", analyzer.statements);
        println!("{:#?}", analyzer.stack);
        exit(0);
    }

    let compiler = Compiler::new(
        filename,
        source.chars(),
        namespace,
        analyzer.constants,
        analyzer.witnesses,
        analyzer.statements,
        analyzer.literals,
        !args.strip,
    );

    let bincode = compiler.compile();

zkas bincode

The zkas bincode is the compiled code in the form of a binary blob that can be read by a program and fed into the VM.

Our programs consist of four sections: constant, literal, contract, and circuit. Our bincode represents the same. Additionally, there is an optional section called .debug which can hold debug info related to the binary.

We currently keep all variables on one stack, and literals on another stack. Therefore, before each STACK_INDEX we prepend a STACK_TYPE, so the VM knows which stack to look up from.

The compiled binary blob has the following layout:

MAGIC_BYTES
BINARY_VERSION
NAMESPACE
.constant
CONSTANT_TYPE CONSTANT_NAME 
CONSTANT_TYPE CONSTANT_NAME 
...
.literal
LITERAL
LITERAL
...
.contract
WITNESS_TYPE
WITNESS_TYPE
...
.circuit
OPCODE ARG_NUM STACK_TYPE STACK_INDEX ... STACK_TYPE STACK_INDEX
OPCODE ARG_NUM STACK_TYPE STACK_INDEX ... STACK_TYPE STACK_INDEX
...
.debug
TBD

Integers in the binary are encoded using variable-integer encoding. See the serial crate and module for our Rust implementation.
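
As a small sketch of how a reader might start decoding this layout (the real implementation lives in zkas' decoder.rs, referenced below):

// Sketch: decode the bincode header (MAGIC_BYTES, BINARY_VERSION).
// The sections that follow are read with the same variable-integer
// encoding; the reference implementation is src/zkas/decoder.rs.
use std::io::Read;

const MAGIC_BYTES: [u8; 4] = [0x0b, 0x01, 0xb1, 0x35];

fn decode_header(reader: &mut impl Read) -> std::io::Result<u8> {
    let mut magic = [0u8; 4];
    reader.read_exact(&mut magic)?;
    assert_eq!(magic, MAGIC_BYTES, "not a zkas binary");

    let mut version = [0u8; 1];
    reader.read_exact(&mut version)?;

    // Next comes NAMESPACE: a variable-integer length prefix
    // followed by the string bytes, then the .constant section.
    Ok(version[0])
}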

Sections

MAGIC_BYTES

The magic bytes are a four-byte file signature used to identify zkas binary code:

0x0b 0x01 0xb1 0x35

BINARY_VERSION

The binary code also contains the binary version, to allow parsing potentially different formats in the future.

0x02

NAMESPACE

This section, coming after MAGIC_BYTES and BINARY_VERSION, contains the reference namespace of the code. This is the namespace used in the source code, e.g.:

constant "MyNamespace" { ... }
contract "MyNamespace" { ... }
circuit  "MyNamespace" { ... }

The string is serialized with variable-integer encoding.

.constant

The constants in the .constant section are declared with their type and name, so that the VM knows how to search for the builtin constant and add it to the stack.

.literal

The literals in the .literal section are currently unsigned integers that get parsed into a u64 type inside the VM. In the future this could be extended with signed integers and strings.

.contract

The .contract section holds the circuit witness values in the form of WITNESS_TYPE. Their stack index is incremented for each witness, as they're kept in the same order as in the source file. The witnesses that are of the same type as the circuit itself (typically Base) will be loaded into the circuit as private values using the Halo2 load_private API.

.circuit

The .circuit section holds the procedural logic of the ZK proof. In here we have statements with opcodes that are executed as understood by the VM. The statements are in the form of:

OPCODE ARG_NUM STACK_TYPE STACK_INDEX ... STACK_TYPE STACK_INDEX

where:

| Element     | Description                                                                                  |
|-------------|----------------------------------------------------------------------------------------------|
| OPCODE      | The opcode we wish to execute                                                                |
| ARG_NUM     | The number of arguments given to this opcode (the VM should check the correctness of this as well) |
| STACK_TYPE  | Type of the stack to look up from, variables or literals (prepended to every STACK_INDEX)    |
| STACK_INDEX | The location of the argument on the stack (repeated ARG_NUM times)                           |

In case an opcode has a return value, the value shall be pushed to the stack and become available for later references.

.debug

TBD

Syntax Reference

Variable Types

| Type             | Description                                    |
|------------------|------------------------------------------------|
| EcPoint          | Elliptic Curve Point.                          |
| EcFixedPoint     | Elliptic Curve Point (constant).               |
| EcFixedPointBase | Elliptic Curve Point in Base Field (constant). |
| Base             | Base Field Element.                            |
| BaseArray        | Base Field Element Array.                      |
| Scalar           | Scalar Field Element.                          |
| ScalarArray      | Scalar Field Element Array.                    |
| MerklePath       | Merkle Tree Path.                              |
| Uint32           | Unsigned 32 Bit Integer.                       |
| Uint64           | Unsigned 64 Bit Integer.                       |

Literal Types

| Type   | Description              |
|--------|--------------------------|
| Uint64 | Unsigned 64 Bit Integer. |

Opcodes

| Opcode              | Description                                                      |
|---------------------|------------------------------------------------------------------|
| EcAdd               | Elliptic Curve Addition.                                         |
| EcMul               | Elliptic Curve Multiplication.                                   |
| EcMulBase           | Elliptic Curve Multiplication with Base.                         |
| EcMulShort          | Elliptic Curve Multiplication with a u64 wrapped in a Scalar.    |
| EcGetX              | Get X Coordinate of Elliptic Curve Point.                        |
| EcGetY              | Get Y Coordinate of Elliptic Curve Point.                        |
| PoseidonHash        | Poseidon Hash of N Elements.                                     |
| MerkleRoot          | Compute a Merkle Root.                                           |
| BaseAdd             | Base Addition.                                                   |
| BaseMul             | Base Multiplication.                                             |
| BaseSub             | Base Subtraction.                                                |
| WitnessBase         | Witness an unsigned integer into a Base.                         |
| RangeCheck          | Perform a (either 64-bit or 253-bit) range check over some Base. |
| LessThanStrict      | Strictly compare if Base a is less than Base b.                  |
| LessThanLoose       | Loosely compare if Base a is less than Base b.                   |
| BoolCheck           | Enforce that a Base fits in a boolean value (either 0 or 1).     |
| ConstrainEqualBase  | Constrain equality of two Base elements from the stack.          |
| ConstrainEqualPoint | Constrain equality of two EcPoint elements from the stack.       |
| ConstrainInstance   | Constrain a Base to a Circuit's Public Input.                    |

Built-in Opcode Wrappers

| Opcode              | Function                                    | Return      |
|---------------------|---------------------------------------------|-------------|
| EcAdd               | ec_add(EcPoint a, EcPoint b)                | (EcPoint c) |
| EcMul               | ec_mul(EcPoint a, EcPoint b)                | (EcPoint c) |
| EcMulBase           | ec_mul_base(Base a, EcFixedPointBase b)     | (EcPoint c) |
| EcMulShort          | ec_mul_short(Base a, EcFixedPointShort b)   | (EcPoint c) |
| EcGetX              | ec_get_x(EcPoint a)                         | (Base x)    |
| EcGetY              | ec_get_y(EcPoint a)                         | (Base y)    |
| PoseidonHash        | poseidon_hash(Base a, ..., Base n)          | (Base h)    |
| MerkleRoot          | merkle_root(Uint32 i, MerklePath p, Base a) | (Base r)    |
| BaseAdd             | base_add(Base a, Base b)                    | (Base c)    |
| BaseMul             | base_mul(Base a, Base b)                    | (Base c)    |
| BaseSub             | base_sub(Base a, Base b)                    | (Base c)    |
| WitnessBase         | witness_base(123)                           | (Base a)    |
| RangeCheck          | range_check(64, Base a)                     | ()          |
| LessThanStrict      | less_than_strict(Base a, Base b)            | ()          |
| LessThanLoose       | less_than_loose(Base a, Base b)             | ()          |
| BoolCheck           | bool_check(Base a)                          | ()          |
| ConstrainEqualBase  | constrain_equal_base(Base a, Base b)        | ()          |
| ConstrainEqualPoint | constrain_equal_point(EcPoint a, EcPoint b) | ()          |
| ConstrainInstance   | constrain_instance(Base a)                  | ()          |

Decoding the bincode

An example decoder implementation can be found in zkas' decoder.rs module.

Examples

This section holds practical and real-world examples of the use for zkas.

Sapling payment scheme

Sapling is a type of transaction which hides both the sender and receiver data, as well as the amount transacted. This means it allows a fully private transaction between two addresses.

Generally, the Sapling payment scheme consists of two ZK proofs: mint and burn. We use the mint proof to create a new coin $C$, and we use the burn proof to spend a previously minted coin.

Mint proof

constant "Mint" {
	EcFixedPointShort VALUE_COMMIT_VALUE,
	EcFixedPoint VALUE_COMMIT_RANDOM,
	EcFixedPointBase NULLIFIER_K,
}

contract "Mint" {
	Base pub_x,
	Base pub_y,
	Base value,
	Base token,
	Base serial,
	Base coin_blind,
	Scalar value_blind,
	Scalar token_blind,
}

circuit "Mint" {
	# Poseidon hash of the coin
	C = poseidon_hash(pub_x, pub_y, value, token, serial, coin_blind);
	constrain_instance(C);

	# Pedersen commitment for coin's value
	vcv = ec_mul_short(value, VALUE_COMMIT_VALUE);
	vcr = ec_mul(value_blind, VALUE_COMMIT_RANDOM);
	value_commit = ec_add(vcv, vcr);
	# Since the value commit is a curve point, we fetch its coordinates
	# and constrain them:
	value_commit_x = ec_get_x(value_commit);
	value_commit_y = ec_get_y(value_commit);
	constrain_instance(value_commit_x);
	constrain_instance(value_commit_y);

	# Pedersen commitment for coin's token ID
	tcv = ec_mul_base(token, NULLIFIER_K);
	tcr = ec_mul(token_blind, VALUE_COMMIT_RANDOM);
	token_commit = ec_add(tcv, tcr);
	# Since token_commit is also a curve point, we'll do the same
	# coordinate dance:
	token_commit_x = ec_get_x(token_commit);
	token_commit_y = ec_get_y(token_commit);
	constrain_instance(token_commit_x);
	constrain_instance(token_commit_y);

	# At this point we've enforced all of our public inputs.
}

As you can see, the Mint proof basically consists of three operations. The first is hashing the coin $C$, and after that, we create Pedersen commitments1 for both the coin's value and the coin's token ID. At the top of the zkas code, we've also declared the constant values that we are going to use for multiplication in the commitments.

The constrain_instance call can take any of our assigned variables and enforce a public input. Public inputs are an array (or vector) of revealed values used by verifiers to verify a zero knowledge proof. In the above case of the Mint proof, since we have five calls to constrain_instance, we would also have an array of five elements that represent these public inputs. The array's order must match the order of the constrain_instance calls since they will be constrained by their index in the array (which is incremented for every call).

In other words, the vector of public inputs could look like this:

let public_inputs = vec![
    coin,
    *value_coords.x(),
    *value_coords.y(),
    *token_coords.x(),
    *token_coords.y(),
];

The Verifier then uses these public inputs to verify a given zero-knowledge proof.

Coin

During the Mint phase we create a new coin $C$, which is bound to the public key $P = (P_x, P_y)$. The coin $C$ is publicly revealed on the blockchain and added to the Merkle tree.

Let $v$ be the coin's value, $t$ be the token ID, $\rho$ be the unique serial number for the coin, and $r_C$ be a random blinding value. We create a commitment (hash) of these elements and produce the coin in zero-knowledge:

$$C = \mathrm{PoseidonHash}(P_x, P_y, v, t, \rho, r_C)$$
An interesting thing to keep in mind is that this commitment is extensible, so one could fit an arbitrary number of different attributes inside it.

Value and token commitments

To have some value for our coin, we ensure it's greater than zero, and then we can create a Pedersen commitment $V$, where $r_V$ is the blinding factor for the commitment, and $G_1$ and $G_2$ are two predefined generators (VALUE_COMMIT_VALUE and VALUE_COMMIT_RANDOM in the zkas code):

$$V = v \cdot G_1 + r_V \cdot G_2$$
The token ID can be thought of as an attribute we append to our coin so we can differentiate the assets we are working with. In practice, this allows us to work with different tokens using the same zero-knowledge proof circuit. For this token ID, we can also build a Pedersen commitment $T$, where $t$ is the token ID, $r_T$ is the blinding factor, and $G_3$ and $G_2$ are predefined generators (NULLIFIER_K and VALUE_COMMIT_RANDOM in the zkas code):

$$T = t \cdot G_3 + r_T \cdot G_2$$
Pseudo-code

Knowing this, we can extend our pseudo-code and build the aforementioned public inputs for the circuit:

    let bincode = include_bytes!("../proof/mint.zk.bin");
    let zkbin = ZkBinary::decode(bincode)?;

    // ======
    // Prover
    // ======

    // Witness values
    let value = 42;
    let token_id = pallas::Base::random(&mut OsRng);
    let value_blind = pallas::Scalar::random(&mut OsRng);
    let token_blind = pallas::Scalar::random(&mut OsRng);
    let serial = pallas::Base::random(&mut OsRng);
    let coin_blind = pallas::Base::random(&mut OsRng);
    let public_key = PublicKey::from_secret(SecretKey::random(&mut OsRng));
    let (pub_x, pub_y) = public_key.xy();

    let prover_witnesses = vec![
        Witness::Base(Value::known(pub_x)),
        Witness::Base(Value::known(pub_y)),
        Witness::Base(Value::known(pallas::Base::from(value))),
        Witness::Base(Value::known(token_id)),
        Witness::Base(Value::known(serial)),
        Witness::Base(Value::known(coin_blind)),
        Witness::Scalar(Value::known(value_blind)),
        Witness::Scalar(Value::known(token_blind)),
    ];

    // Create the public inputs
    let msgs = [pub_x, pub_y, pallas::Base::from(value), token_id, serial, coin_blind];
    let coin = poseidon::Hash::<_, poseidon::P128Pow5T3, poseidon::ConstantLength<6>, 3, 2>::init()
        .hash(msgs);

    let value_commit = pedersen_commitment_u64(value, value_blind);
    let value_coords = value_commit.to_affine().coordinates().unwrap();

    let token_commit = pedersen_commitment_base(token_id, token_blind);
    let token_coords = token_commit.to_affine().coordinates().unwrap();

    let public_inputs =
        vec![coin, *value_coords.x(), *value_coords.y(), *token_coords.x(), *token_coords.y()];

    // Create the circuit
    let circuit = ZkCircuit::new(prover_witnesses, zkbin.clone());

    let proving_key = ProvingKey::build(13, &circuit);
    let proof = Proof::create(&proving_key, &[circuit], &public_inputs, &mut OsRng)?;

    // ========
    // Verifier
    // ========

    // Construct empty witnesses
    let verifier_witnesses = empty_witnesses(&zkbin);

    // Create the circuit
    let circuit = ZkCircuit::new(verifier_witnesses, zkbin);

    let verifying_key = VerifyingKey::build(13, &circuit);
    proof.verify(&verifying_key, &public_inputs)?;

Burn proof

constant "Burn" {
	EcFixedPointShort VALUE_COMMIT_VALUE,
	EcFixedPoint VALUE_COMMIT_RANDOM,
	EcFixedPointBase NULLIFIER_K,
}

contract "Burn" {
	Base secret,
	Base serial,
	Base value,
	Base token,
	Base coin_blind,
	Scalar value_blind,
	Scalar token_blind,
	Uint32 leaf_pos,
	MerklePath path,
	Base signature_secret,
}

circuit "Burn" {
	# Poseidon hash of the nullifier
	nullifier = poseidon_hash(secret, serial);
	constrain_instance(nullifier);

	# Pedersen commitment for coin's value
	vcv = ec_mul_short(value, VALUE_COMMIT_VALUE);
	vcr = ec_mul(value_blind, VALUE_COMMIT_RANDOM);
	value_commit = ec_add(vcv, vcr);
	# Since value_commit is a curve point, we fetch its coordinates
	# and constrain them:
	value_commit_x = ec_get_x(value_commit);
	value_commit_y = ec_get_y(value_commit);
	constrain_instance(value_commit_x);
	constrain_instance(value_commit_y);

	# Pedersen commitment for coin's token ID
	tcv = ec_mul_base(token, NULLIFIER_K);
	tcr = ec_mul(token_blind, VALUE_COMMIT_RANDOM);
	token_commit = ec_add(tcv, tcr);
	# Since token_commit is also a curve point, we'll do the same
	# coordinate dance:
	token_commit_x = ec_get_x(token_commit);
	token_commit_y = ec_get_y(token_commit);
	constrain_instance(token_commit_x);
	constrain_instance(token_commit_y);

	# Coin hash
	pub = ec_mul_base(secret, NULLIFIER_K);
	pub_x = ec_get_x(pub);
	pub_y = ec_get_y(pub);
	C = poseidon_hash(pub_x, pub_y, value, token, serial, coin_blind);

	# Merkle root
	root = merkle_root(leaf_pos, path, C);
	constrain_instance(root);

	# Finally, we derive a public key for the signature and
	# constrain its coordinates:
	signature_public = ec_mul_base(signature_secret, NULLIFIER_K);
	signature_x = ec_get_x(signature_public);
	signature_y = ec_get_y(signature_public);
	constrain_instance(signature_x);
	constrain_instance(signature_y);

	# At this point we've enforced all of our public inputs.
}

The Burn proof consists of operations similar to the Mint proof, with the addition of a Merkle root2 calculation. In the same manner, we compute a Poseidon hash instance, we build Pedersen commitments for the value and token ID, and finally we do a public key derivation.

In this case, our vector of public inputs could look like:

let public_inputs = vec![
    nullifier,
    *value_coords.x(),
    *value_coords.y(),
    *token_coords.x(),
    *token_coords.y(),
    merkle_root,
    *sig_coords.x(),
    *sig_coords.y(),
];

Nullifier

When we spend the coin, we must ensure that the value of the coin cannot be double spent. We call this the Burn phase. The process relies on a nullifier $N$, which we create using the secret key $x$ for the public key $P$ and the coin's unique random serial $\rho$. Nullifiers are unique per coin and prevent double spending:

$$N = \mathrm{PoseidonHash}(x, \rho)$$
Merkle root

We check that the Merkle root corresponds to a coin $C$ which is in the Merkle tree.

Value and token commitments

Just like we calculated these for the Mint proof, we do the same here:

$$V = v \cdot G_1 + r_V \cdot G_2$$
$$T = t \cdot G_3 + r_T \cdot G_2$$
Public key derivation

We check that the secret key $x$ corresponds to a public key $P$. Usually, we do public key derivation by multiplying our secret key with a generator $G$ (NULLIFIER_K in the zkas code), which results in a public key:

$$P = x \cdot G$$
Pseudo-code

Knowing this, we can extend our pseudo-code and build the aforementioned public inputs for the circuit:

    let bincode = include_bytes!("../proof/burn.zk.bin");
    let zkbin = ZkBinary::decode(bincode)?;

    // ======
    // Prover
    // ======

    // Witness values
    let value = 42;
    let token_id = pallas::Base::random(&mut OsRng);
    let value_blind = pallas::Scalar::random(&mut OsRng);
    let token_blind = pallas::Scalar::random(&mut OsRng);
    let serial = pallas::Base::random(&mut OsRng);
    let coin_blind = pallas::Base::random(&mut OsRng);
    let secret = SecretKey::random(&mut OsRng);
    let sig_secret = SecretKey::random(&mut OsRng);

    // Build the coin
    let coin2 = {
        let (pub_x, pub_y) = PublicKey::from_secret(secret).xy();
        let messages = [pub_x, pub_y, pallas::Base::from(value), token_id, serial, coin_blind];

        poseidon::Hash::<_, poseidon::P128Pow5T3, poseidon::ConstantLength<6>, 3, 2>::init()
            .hash(messages)
    };

    // Fill the merkle tree with some random coins that we want to witness,
    // and also add the above coin.
    let mut tree = BridgeTree::<MerkleNode, 32>::new(100);
    let coin0 = pallas::Base::random(&mut OsRng);
    let coin1 = pallas::Base::random(&mut OsRng);
    let coin3 = pallas::Base::random(&mut OsRng);

    tree.append(&MerkleNode::from(coin0));
    tree.witness();
    tree.append(&MerkleNode::from(coin1));
    tree.append(&MerkleNode::from(coin2));
    let leaf_pos = tree.witness().unwrap();
    tree.append(&MerkleNode::from(coin3));
    tree.witness();

    let root = tree.root(0).unwrap();
    let merkle_path = tree.authentication_path(leaf_pos, &root).unwrap();
    let leaf_pos: u64 = leaf_pos.into();

    let prover_witnesses = vec![
        Witness::Base(Value::known(secret.inner())),
        Witness::Base(Value::known(serial)),
        Witness::Base(Value::known(pallas::Base::from(value))),
        Witness::Base(Value::known(token_id)),
        Witness::Base(Value::known(coin_blind)),
        Witness::Scalar(Value::known(value_blind)),
        Witness::Scalar(Value::known(token_blind)),
        Witness::Uint32(Value::known(leaf_pos.try_into().unwrap())),
        Witness::MerklePath(Value::known(merkle_path.try_into().unwrap())),
        Witness::Base(Value::known(sig_secret.inner())),
    ];

    // Create the public inputs
    let nullifier = Nullifier::from(poseidon_hash::<2>([secret.inner(), serial]));

    let value_commit = pedersen_commitment_u64(value, value_blind);
    let value_coords = value_commit.to_affine().coordinates().unwrap();

    let token_commit = pedersen_commitment_base(token_id, token_blind);
    let token_coords = token_commit.to_affine().coordinates().unwrap();

    let sig_pubkey = PublicKey::from_secret(sig_secret);
    let (sig_x, sig_y) = sig_pubkey.xy();

    let merkle_root = tree.root(0).unwrap();

    let public_inputs = vec![
        nullifier.inner(),
        *value_coords.x(),
        *value_coords.y(),
        *token_coords.x(),
        *token_coords.y(),
        merkle_root.inner(),
        sig_x,
        sig_y,
    ];

    // Create the circuit
    let circuit = ZkCircuit::new(prover_witnesses, zkbin.clone());

    let proving_key = ProvingKey::build(13, &circuit);
    let proof = Proof::create(&proving_key, &[circuit], &public_inputs, &mut OsRng)?;

    // ========
    // Verifier
    // ========

    // Construct empty witnesses
    let verifier_witnesses = empty_witnesses(&zkbin);

    // Create the circuit
    let circuit = ZkCircuit::new(verifier_witnesses, zkbin);

    let verifying_key = VerifyingKey::build(13, &circuit);
    proof.verify(&verifying_key, &public_inputs)?;
1. See section 3: The Commitment Scheme of Torben Pryds Pedersen's paper on Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing.

Anonymous voting

Anonymous voting1 is a type of voting process where users can vote without revealing their identity, by proving they are accepted as valid voters.

The proof enables user privacy and allows for fully anonymous voting.

The starting point is a Merkle proof2, which efficiently proves that a voter's key belongs to a Merkle tree. However, using this proof alone would allow the organizer of a process to correlate each vote envelope with its voter's key in the database, so votes wouldn't be secret.

Vote proof

constant "Vote" {
	EcFixedPointShort VALUE_COMMIT_VALUE,
	EcFixedPoint VALUE_COMMIT_RANDOM,
	EcFixedPointBase NULLIFIER_K,
}

contract "Vote" {
	Base process_id_0,
	Base process_id_1,
	Base secret_key,
	Base vote,
	Scalar vote_blind,
	Uint32 leaf_pos,
	MerklePath path,
}

circuit "Vote" {
	# Nullifier hash
	process_id = poseidon_hash(process_id_0, process_id_1);
	nullifier = poseidon_hash(secret_key, process_id);
	constrain_instance(nullifier);

	# Public key derivation and hashing
	public_key = ec_mul_base(secret_key, NULLIFIER_K);
	public_x = ec_get_x(public_key);
	public_y = ec_get_y(public_key);
	pk_hash = poseidon_hash(public_x, public_y);

	# Merkle root
	root = merkle_root(leaf_pos, path, pk_hash);
	constrain_instance(root);

	# Pedersen commitment for vote
	vcv = ec_mul_short(vote, VALUE_COMMIT_VALUE);
	vcr = ec_mul(vote_blind, VALUE_COMMIT_RANDOM);
	vote_commit = ec_add(vcv, vcr);
	# Since vote_commit is a curve point, we fetch its coordinates
	# and constrain them:
	vote_commit_x = ec_get_x(vote_commit);
	vote_commit_y = ec_get_y(vote_commit);
	constrain_instance(vote_commit_x);
	constrain_instance(vote_commit_y);
}

Our proof consists of four main operations. First, we hash the nullifier using our secret key and the hashed process ID. Next, we derive our public key and hash it. Then, we take this hash and create a Merkle proof that it is indeed contained in the given Merkle tree. And finally, we create a Pedersen commitment3 for the vote choice itself.

Our vector of public inputs can look like this:

let public_inputs = vec![
    nullifier,
    merkle_root,
    *vote_coords.x(),
    *vote_coords.y(),
];

And then the Verifier uses these public inputs to verify the given zero-knowledge proof.

1. Specification taken from the vocdoni franchise proof.

3. See section 3: The Commitment Scheme of Torben Pryds Pedersen's paper on Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing.

Miscellaneous tools

This section documents some miscellaneous tools provided in the DarkFi ecosystem.

vanityaddr

A tool for vanity address generation for DarkFi keypairs. Given some prefix, the tool will brute-force secret keys to find one which, when derived into an address, starts with the given prefix.

Usage

vanityaddr 0.3.0
Vanity address generation tool for DarkFi keypairs.

USAGE:
    vanityaddr [OPTIONS] <PREFIX>

ARGS:
    <PREFIX>    Prefix to search (must start with 1)

OPTIONS:
    -c                  Should the search be case-sensitive
    -h, --help          Print help information
    -t <THREADS>        Number of threads to use (defaults to number of available CPUs)
    -V, --version       Print version information

We can use the tool in our command line:

% vanityaddr 1Foo
[00:00:05] 53370 attempts

And the program will start crunching numbers. After a period of time, we will get JSON output containing an address, secret key, and the number of attempts it took to find the secret key.

{"address":"1FoomByzBBQywKaeBB5XPkAm5eCboh8K4CBhBe9uKbJm3kEiCS","attempts":78418,"secret":"0x16545da4a401adcd035ef51c8040acf5f4f1c66c0dd290bb5ec9e95991ae3615"}

P2P IRC

In DarkFi, we organize our communication using resilient and censorship-resistant infrastructure. For chatting, ircd is a peer-to-peer implementation of an IRC server in which any user can participate anonymously using any IRC frontend and by running the IRC daemon. ircd uses the DarkFi P2P engine to synchronize chats between hosts.

Installation

% git clone https://github.com/darkrenaissance/darkfi 
% cd darkfi
% make BINS=ircd
% sudo make install BINS=ircd

Follow the instructions in the README to ensure you have all the necessary dependencies.

Usage (DarkFi Network)

Upon installing ircd as described above, the preconfigured defaults will allow you to connect to the network and start chatting with the rest of the DarkFi community.

First, try to start ircd from your command-line so it can spawn its configuration file in place. The preconfigured defaults will autojoin you to the #dev channel, where the community is most active and talks about DarkFi development.

% ircd

After running it for the first time, ircd will create a configuration file you can review and potentially edit. It might be useful if you want to add other channels you want to autojoin (like #philosophy and #memes), or if you want to set a shared secret for some channel in order for it to be encrypted between its participants.

When done, you can run ircd for the second time in order for it to connect to the network and start participating in the P2P protocol:

% ircd

Clients

Weechat

In this section, we'll briefly cover how to use the Weechat IRC client to connect and chat with ircd.

Normally, you should be able to install weechat using your distribution's package manager. If not, have a look at the weechat git repository for instructions on how to install it on your computer.

Once installed, we can configure a new server which will represent our ircd instance. First, start weechat, and in its window run the following commands (this assumes irc_listen in the ircd config file is set to 127.0.0.1:6667):

/server add darkfi localhost/6667 -autoconnect
/save
/quit

This will set up the server, save the settings, and exit weechat. You are now ready to begin using the chat. Simply start weechat and everything should work.

Usage (Local Deployment)

These steps below are only for developers who wish to make a testing deployment. The previous sections are sufficient to join the chat.

Seed Node

First you must run a seed node. The seed node is a static host which nodes can connect to when they first connect to the network. The seed_session simply connects to a seed node and runs protocol_seed, which requests a list of addresses from the seed node and disconnects straight after receiving them.

The first time you run the program, a config file will be created in ~/.config/darkfi if you are using Linux, or in ~/Library/Application Support/darkfi/ on macOS. You must specify an inbound accept address in your config file to configure a seed node:

## P2P accept addresses
inbound=["127.0.0.1:11001"]

Note that the above config doesn't specify an external address since the seed node shouldn't be advertised in the list of connectable nodes. The seed node does not participate as a normal node in the p2p network. It simply allows new nodes to discover other nodes in the network during the bootstrapping phase.

Inbound Node

This is a node that accepts inbound connections on the network but does not make any outbound connections.

The external addresses are important and must be correct.

To run an inbound node, your config file must contain the following info:

## P2P accept addresses
inbound=["127.0.0.1:11002"]

## P2P external addresses
external_addr=["127.0.0.1:11002"]

## Seed nodes to connect to 
seeds=["127.0.0.1:11001"]

Outbound Node

This is a node which has 8 outbound connection slots and no inbound connections. This means the node has 8 slots which will actively search for unique nodes to connect to in the p2p network.

In your config file:

## Connection slots
outbound_connections=8

## Seed nodes to connect to 
seeds=["127.0.0.1:11001"]

Attaching the IRC Frontend

Assuming you have set up the three nodes above to create a small model testnet, and both the inbound and outbound nodes are connected, you can test them out using weechat.

To create separate weechat instances, use the --dir option:

weechat --dir /tmp/a/
weechat --dir /tmp/b/

Then in both clients, you must set the option to connect to temporary servers:

/set irc.look.temporary_servers on

Finally, you can attach to the local ircd instances:

/connect localhost/6667
/connect localhost/6668

And send messages to yourself.

Running a Fullnode

See the script script/run_node.sh for an example of how to deploy a full node which does seed session synchronization, and accepts both inbound and outbound connections.

Global Buffer

Copy this script to ~/.weechat/python/autoload/, and it will create a single buffer which aggregates messages from all channels. This is useful for monitoring activity from all channels without needing to flick through them.

Ircd Specification

ircd uses a hashchain to maintain synchronization between nodes. Messages are handled as events in the ircd network.

PrivMsgEvent

This is the main message type inside ircd. The PrivMsgEvent is an event action.

| Field    | Data Type | Comments                                                 |
|----------|-----------|----------------------------------------------------------|
| nickname | String    | The nickname of the sender (must be less than 32 chars)  |
| target   | String    | The target of the message (recipient)                    |
| message  | String    | The actual content of the message                        |

ChannelInfo

Preconfigured channel in the configuration file.

In the TOML configuration file, the channel is set as such:

[channel."#dev"]
secret = "GvH4kno3kUu6dqPrZ8zjMhqxTUDZ2ev16EdprZiZJgj1"
topic = "DarkFi Development Channel"
| Field  | Data Type | Comments                                                     |
|--------|-----------|--------------------------------------------------------------|
| topic  | String    | Optional topic for the channel                               |
| secret | String    | Optional NaCl box for the channel, used for {en,de}cryption  |
| joined | bool      | Indicates whether the user has joined the channel            |
| names  | Vec       | All nicknames which are visible on the channel               |

ContactInfo

Preconfigured contact in the configuration file.

In the TOML configuration file, the contact is set as such:

[contact."nick"]
pubkey = "7CkVuFgwTUpJn5Sv67Q3fyEDpa28yrSeL5Hg2GqQ4jfM"
| Field  | Data Type | Comments                                            |
|--------|-----------|-----------------------------------------------------|
| pubkey | String    | A public key for the contact, used to encrypt messages |

IrcConfig

The base Irc configuration for each new IrcClient.

| Field         | Data Type                    | Comments                                                                     |
|---------------|------------------------------|------------------------------------------------------------------------------|
| is_nick_init  | bool                         | Confirmation of receiving the /nick command                                  |
| is_user_init  | bool                         | Confirmation of receiving the /user command                                  |
| is_cap_end    | bool                         | Indicates whether the irc client finished the Client Capability Negotiation  |
| is_pass_init  | bool                         | Confirmation of checking the password in the configuration file              |
| is_registered | bool                         | Indicates the IrcClient is initialized and ready to send/receive messages    |
| nickname      | String                       | The irc client nickname                                                      |
| password      | String                       | The password for the irc client (it may be empty)                            |
| private_key   | Option                       | A private key to decrypt direct messages from contacts                       |
| capabilities  | HashMap<String, bool>        | A list of capabilities for the irc clients and the server to negotiate       |
| auto_channels | Vec                          | Channels the irc clients join automatically                                  |
| channels      | HashMap<String, ChannelInfo> | A list of preconfigured channels in the configuration file                   |
| contacts      | HashMap<String, ContactInfo> | A list of preconfigured contacts for direct messages                         |

IrcServer

The server starts listening on an address specified in the configuration file.

For each irc client that connects, an IrcClient instance is created.

| Field                 | Data Type                   | Comments                                              |
|-----------------------|-----------------------------|--------------------------------------------------------|
| settings              | Settings                    | The base settings parsed from the configuration file   |
| clients_subscriptions | SubscriberPtr<ClientSubMsg> | Channels to notify the IrcClients about new data       |

IrcClient

The IrcClient handles all irc operations and commands from the irc client.

| Field           | Data Type                   | Comments                                                           |
|-----------------|-----------------------------|---------------------------------------------------------------------|
| write_stream    | WriteHalf                   | A writer for sending data to the connection stream                  |
| read_stream     | ReadHalf                    | Reads data from the connection stream                               |
| address         | SocketAddr                  | The actual address of the irc client connection                     |
| irc_config      | IrcConfig                   | Base configuration for irc                                          |
| server_notifier | Channel<(NotifierMsg, u64)> | A channel to notify the server about new data from the irc client   |
| subscription    | Subscription<ClientSubMsg>  | A channel to receive notifications from the server                  |

Communications between the server and the clients

Two communication channels are initialized by the server for every new IrcClient.

The channel Channel<(NotifierMsg, u64)> is used by the IrcClient to notify the server about new messages/queries received from the irc client.

The channel Subscription<ClientSubMsg> is used by the server to notify IrcClients about new messages/queries fetched from the View.

ClientSubMsg

enum ClientSubMsg {
	Privmsg(`PrivMsgEvent`),
	Config(`IrcConfig`),	
}

NotifierMsg

enum NotifierMsg {
	Privmsg(`PrivMsgEvent`),
	UpdateConfig,
}

Configuring a Private chat between users

Any two users on the ircd server can establish a fully encrypted communication medium between each other using a basic keypair setup.

Configuring ircd_config.toml

Generate a keypair using the following command:

% ircd --gen-keypair

This is the private key used for the encryption of messages across the network.

Save the private key safely and add it to the ircd_config.toml file as shown below.

[private_key."your_private_key_goes_here"]

To share your public key with a user over the ircd server, you can use the following command:

 /query User_A "Hi this is my publickey: XXXXXX"

Note: This message will be publicly visible on the IRC chat, i.e. anyone running the IRC daemon can view these messages in their logs.

See the example ircd_config.toml for more details

Example

Let's start by configuring our contacts list in the generated ircd_config.toml file (you can also refer to the examples written in the comments of the toml file):

[contact."User_A"]
contact_pubkey = "XXXXXXX"
[contact."User_B"]
contact_pubkey = "YYYYYYY"

Note: After configuring the ircd_config.toml file, you will need to restart your IRC daemon for the changes to take effect.

Let's see an example where 'User_A' sends a "Hi" message to 'User_B' using the /msg command:

 /msg User_B Hi

IRCD logs of 'User_A'

09:36:59 [INFO] [CLIENT 127.0.0.1:xxxx] Msg: PRIVMSG User_B :Hi
09:36:59 [INFO] [CLIENT 127.0.0.1:xxxx] (Plain) PRIVMSG User_B :Hi
09:36:59 [INFO] [CLIENT 127.0.0.1:57964] (Encrypted) PRIVMSG: Privmsg { id: 12345, nickname: "xxxxxxx", target: "xxxxx", message: "xxxxxx", timestamp: 1665481019, term: 0, read_confirms: 0 }
09:36:59 [INFO] [P2P] Broadcast: Privmsg { id: 7563042059426128593, nickname: "xxxx", target: "xxxxx", message: "xxxx", timestamp: 1665481019, term: 0, read_confirms: 0 }

IRCD logs of 'User_B'

09:36:59 [INFO] [P2P] Received: Privmsg { id: 123457, nickname: "xxxx", target: "xxxx", message: "xxxx", timestamp: 1665481019, term: 0, read_confirms: 0 }
09:36:59 [INFO] [P2P] Decrypted received message: Privmsg { id: 123457, nickname: "User_A", target: "User_B", message: "Hi", timestamp: 1665481019, term: 0, read_confirms: 0 }

Note for Weechat client users: When you private message someone as shown above, the buffer will not pop up in the weechat client until you receive a reply from that person. For example, here 'User_A' will not see any new buffer on his irc interface for the message he just sent to 'User_B' until 'User_B' replies, but 'User_B' will get a buffer shown on his irc client with the message 'Hi'.

Reply from 'User_B' to 'User_A'

/msg User_A welcome!

IRCD logs of 'User_B'

10:25:45 [INFO] [CLIENT 127.0.0.1:57396] Msg: PRIVMSG User_A :welcome!
10:25:45 [INFO] [CLIENT 127.0.0.1:57396] (Plain) PRIVMSG User_A :welcome!
10:25:45 [INFO] [CLIENT 127.0.0.1:57396] (Encrypted) PRIVMSG: Privmsg { id: 123458, nickname: "xxxx", target: "xxxx", message: "yyyyyyy", timestamp: 1665483945, term: 0, read_confirms: 0 }
10:25:45 [INFO] [P2P] Broadcast: Privmsg { id: 123458, nickname: "xxxxx", target: "xxxxx", message: "yyyyyyyy", timestamp: 1665483945, term: 0, read_confirms: 0 }

IRCD logs of 'User_A'

10:25:46 [INFO] [P2P] Received: Privmsg { id: 123458, nickname: "xxxxxxx", target: "xxxxxx", message: "yyyyyy", timestamp: 1665483945, term: 0, read_confirms: 0 }
10:25:46 [INFO] [P2P] Decrypted received message: Privmsg { id: 123458, nickname: "User_B", target: "User_A", message: "welcome! ", timestamp: 1665483945, term: 0, read_confirms: 0 }

Tau

An encrypted task management app using a peer-to-peer network.
Multiple users can collaborate by working on the same tasks, and all users will have synced tasks.

Install

% git clone https://github.com/darkrenaissance/darkfi 
% cd darkfi
% make BINS="taud tau"
% sudo make install BINS="taud tau"

Usage

To run your own instance, check Local Deployment.

% tau --help 
tau 0.3.0

USAGE:
    tau [OPTIONS] [FILTERS]... [SUBCOMMAND]

ARGS:
    <FILTERS>...    Search filters (zero or more)                                 

OPTIONS:
    -e, --endpoint <ENDPOINT>    taud JSON-RPC endpoint [default: tcp://127.0.0.1:23330]
    -h, --help                   Print help information
    -v                           Increase verbosity (-vvv supported)
    -V, --version                Print version information

SUBCOMMANDS:
    add        Add a new task.                                                    
    comment    Set or Get comment for task(s)
    export     Export tasks to a specified directory
    help       Print this message or the help of the given subcommand(s)
    import     Import tasks from a specified directory
    info       Get all data about selected task(s)
    list       List tasks
    log        Log drawdown
    modify     Modify/Edit an existing task
    open       Open task(s)
    pause      Pause task(s)
    start      Start task(s)
    stop       Stop task(s)
    switch     Switch workspace
% tau [SUBCOMMAND] --help

Quick start

Add tasks

% tau add Review tau usage desc:description	# will add a new task named
%						# "Review tau usage" with
%						# "description" in its desc field
% tau add Second task assign:dave 	# will add a new task and assign it
%					# to "dave".
%					# Note: not having "desc:" key
% 					# will pop up your OS editor
%					# configured in \$EDITOR env var,
%					# this is recommended for
%					# formatting reasons and
%					# will be used through this demo.
% tau add Third task project:tau rank:1.1
% tau add Fourth task assign:dave project:tau due:1509 rank:2.5
% tau add Five

List tasks

% tau				# all non-stop tasks
% tau list			# all non-stop tasks
% tau 1-3			# tasks 1 to 3
% tau 1,2 state:open		# tasks 1 and 2, if they are open
% tau rank:gt:2			# all tasks that have rank greater than 2
% tau due.not:today		# all tasks whose due date is not today
% tau due.after:0909		# all tasks whose due date is after September 9th
% tau assign:dave		# tasks that assign field is "dave"

Filtering tasks

Note: mod commands are: start, open, pause, stop and modify.

Note: All filters from the previous section work with mod commands.

% tau 5 stop			# will stop task 5
% tau 1,3 start			# start 1 and 3
% tau 2 pause			# pause 2
% tau 2,4 modify due:2009	# edit due to September in tasks 2 and 4 
% tau 1-4 modify project:tau	# edit project to tau in tasks 1,2,3 and 4
% tau state:pause open		# open paused tasks
% tau 3 info			# show information about task 3 (does not modify)

Comments

% tau 1 comment "content foo bar"	# will add a comment to task 1
% tau 3 comment				# will show comments on task 3 

Log drawdown

% tau log 0922			# will list assignees of stopped tasks
% tau log 0922 [<Assignee>]	# will draw a heatmap of stopped tasks for [Assignee]

Export and Import

% tau export ~/example_dir	# will save tasks json files to the path
% tau import ~/example_dir	# will reload saved json files from the path

Switch workspace

% tau switch darkfi	# darkfi workspace needs to be configured in config file

Local Deployment

Seed Node

First you must run a seed node. The seed node is a static host which nodes can connect to when they first join the network. The seed_session simply connects to a seed node and runs protocol_seed, which requests a list of addresses from the seed node and disconnects straight after receiving them.

In the config file:

	## P2P accept addresses
	inbound=["127.0.0.1:11001"] 

Note that the above config doesn't specify an external address since the seed node shouldn't be advertised in the list of connectable nodes. The seed node does not participate as a normal node in the p2p network. It simply allows new nodes to discover other nodes in the network during the bootstrapping phase.

Inbound Node

This is a node accepting inbound connections on the network but which is not making any outbound connections.

The external addresses are important and must be correct.

In the config file:
	
	## P2P accept addresses
	inbound=["127.0.0.1:11002"]
	
	## P2P external addresses
	external_addr=["127.0.0.1:11002"]

	## Seed nodes to connect to 
	seeds=["127.0.0.1:11001"]

Outbound Node

This is a node which has 8 outbound connection slots and no inbound connections. This means the node has 8 slots which will actively search for unique nodes to connect to in the p2p network.

In the config file:

	## Connection slots
	outbound_connections=8

	## Seed nodes to connect to
	seeds=["127.0.0.1:11001"]

Event Graph

The event graph represents sequential events in an asynchronous environment.

Events can form small forks, which should be quickly reconciled as new events are added to the structure and pull them in.

Ties are broken using the timestamps inside the events.

The main purpose of the graph is synchronization. This allows nodes in the network to maintain a fully synced store of objects. How those objects are interpreted is up to the application.

We add a little more information about the objects, namely that they are events with a timestamp, which allows our algorithm to be more intelligent.

Each node in the graph is read-only, and the graph is an append-only data structure. However, the application may wish to prune old data from the store to conserve memory.

Synchronization

Nodes in the event graph are active, whereas nodes not yet in the graph are orphans.

When node A receives an event from node B, it will check whether all of the event's parents are in the active pool. If there are missing parents then:

  1. Check whether the missing parents exist in the orphan pool.
    1. If they do, and those orphans themselves have missing parents (they should), request their missing parent events from node B.
  2. If the missing parents are not in the orphan pool:
    1. Add this event to the orphan pool.
    2. Request the missing parent events from node B.

Once an event is successfully added to the active pool and linked into the event graph, we call reorganize(). This function loops through all the orphans and tries to relink them with the active pool. Any orphans that still have missing parents are added back to the orphan pool.
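Here is a minimal sketch of that relinking loop, assuming the active and orphan pools are hash maps keyed by event id; the names and shapes are illustrative, not the actual implementation.

use std::collections::HashMap;

type EventId = [u8; 32];

struct Event {
    parents: Vec<EventId>,
    // ... timestamp, action, data
}

// Simplified reorganize(): repeatedly sweep the orphan pool and link any
// orphan whose parents are all already in the active pool.
fn reorganize(active: &mut HashMap<EventId, Event>, orphans: &mut HashMap<EventId, Event>) {
    loop {
        // Collect orphans whose parents are all active.
        let ready: Vec<EventId> = orphans
            .iter()
            .filter(|(_, ev)| ev.parents.iter().all(|p| active.contains_key(p)))
            .map(|(id, _)| *id)
            .collect();

        if ready.is_empty() {
            // Remaining orphans still have missing parents; they stay in
            // the pool until those parent events are requested and received.
            break;
        }

        for id in ready {
            let ev = orphans.remove(&id).unwrap();
            active.insert(id, ev);
        }
    }
}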

Creating an Event

In this example A creates a new event. Since the event is new, it is impossible for any nodes in the network to possess it, so A does not need to send an inv.

  1. A creates a new event.
  2. A sends the event to its connected nodes.
  3. Each node receiving the event will:
    1. Create an inv representing the event.
    2. Broadcast it to all its connected nodes with p2p.broadcast(inv).
  4. A waits for 3 nodes to respond back with the inv, confirming they received it.
  5. Until A receives those inv confirms, it will wait for 1 minute and then resend the event message (see the sketch below).

Upon receiving an inv:

  1. Check if we already have the event. If not, reply back with getevent.
  2. The node receiving getevent sends the event back.

So in this diagram, A sends the event to its peers. Each peer responds back to A with an inv, and A stops resending the event. The peers' own neighbors also receive the inv, and since they don't have the event, they reply with a getevent message and receive the event in turn.
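To make steps 4 and 5 concrete, here is a sketch of the sender-side confirmation loop. The broadcast and poll_invs callbacks are placeholders for the real p2p calls, the thread-based waiting is illustrative, and the constants mirror the numbers in the steps above.

use std::time::{Duration, Instant};

type EventId = [u8; 32];

const REQUIRED_CONFIRMS: usize = 3;
const RESEND_INTERVAL: Duration = Duration::from_secs(60);

// Rebroadcast the event every minute until enough inv confirms arrive.
fn wait_for_confirms(
    event_id: EventId,
    broadcast: &mut dyn FnMut(EventId),
    poll_invs: &mut dyn FnMut() -> Vec<EventId>,
) {
    broadcast(event_id);
    let mut confirms = 0;
    let mut last_send = Instant::now();

    while confirms < REQUIRED_CONFIRMS {
        // Count inv replies that reference our event.
        confirms += poll_invs().iter().filter(|id| **id == event_id).count();

        // No confirmation within the interval: resend the event message.
        if last_send.elapsed() >= RESEND_INTERVAL {
            broadcast(event_id);
            last_send = Instant::now();
        }

        std::thread::sleep(Duration::from_millis(100));
    }
}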

Genesis Event

All nodes start with a single hardcoded genesis event in their graph. The application layer should ignore this event. This serves as the origin event for synchronization.

Network Protocol

Common Structures

EventId

type EventId = [u8; 32];

inv

Inventory vectors are used for notifying other nodes about objects they have or data which is being requested.

Description    Data Type       Comments
invs           Vec<EventId>    Inventory items

Upon receiving an unknown inventory object, a node will issue getevent.

getevent

Requests event data from a node.

Description    Data Type       Comments
invs           Vec<EventId>    Inventory items

event

Event object data. This is either sent when a new event is created, or in response to getevent.

Description    Data Type       Comments
parents        Vec<EventId>    Parent events
timestamp      u64             Event timestamp
action         u8              Type of event
data           Vec<u8>         Event specific data

getheads

This message is only sent the first time a node connects to the network, in order to synchronize with the current network state.

Once updated, a node uses the messages above to stay synchronized.

Description    Data Type       Comments
invs           Vec<EventId>    Inventory items

Structures

EventId

Hash of Event

type EventId = [u8; 32];

EventAction

The Event could have many actions according to the underlying data.

enum EventAction { ... };

Event

Description            Data Type      Comments
previous_event_hash    EventId        Hash of the previous Event
action                 EventAction    Event's action
timestamp              u64            Event's timestamp

EventNode

Description    Data Type          Comments
parent         Option<EventId>    Only the current root has this set to None
event          Event              The Event itself
children       Vec<EventId>       The Events which have this Event's hash as their parent

Model

The Model consists of chains of EventNodes structured as a tree, where each chain is an Event-based list. To maintain a strict order of chains, each Event depends on the hash of the previous Event. All of the chains share a root Event to preserve the tree structure.

Description     Data Type                      Comments
current_root    EventId                        The root Event for the tree
orphans         HashMap<EventId, Event>        Recently added Events
event_map       HashMap<EventId, EventNode>    The actual tree
events_queue    EventsQueue                    Communication channel

View

The View checks the Model for new Events and then dispatches these Events to the clients.

Events are sorted according to the timestamp attached to each Event.

Description    Data Type                  Comments
seen           HashMap<EventId, Event>    A list of Events

EventsQueue

The EventsQueue is used to transport Events from the Model to the View.

The Model fills the EventsQueue with new Events, while the View continuously fetches Events from the queue.
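A minimal sketch of this hand-off, assuming EventsQueue wraps an unbounded async channel; the wrapper shape and names are an assumption, not the actual type.

use async_std::channel::{unbounded, Receiver, Sender};

struct Event; // stands in for the real Event type

// Sketch: the Model holds the sender half, the View the receiver half.
struct EventsQueue {
    tx: Sender<Event>,
    rx: Receiver<Event>,
}

async fn demo() {
    let (tx, rx) = unbounded::<Event>();
    let queue = EventsQueue { tx, rx };

    // Model side: push a newly linked Event into the queue.
    queue.tx.send(Event).await.unwrap();

    // View side: continuously fetch Events and dispatch them to clients.
    if let Ok(_event) = queue.rx.recv().await {
        // dispatch to subscribed clients, ordered by timestamp
    }
}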

Architecture

Tau uses a Model-View software architecture. All of the operations, main data structures, and message handling from the network protocol happen on the Model side. This keeps the View independent of the Model and allows the View to focus on receiving continuous updates from it.

Add new Event

Upon receiving a new Event from the network protocol, the Event will be added to the orphans list.

After the ancestor of the new orphan is found, the orphan Event will be added to the chain according to its ancestor.

For example: in Example1 below, an Event is added to the first chain if its previous hash is Event-A1.

Remove old leaves

Remove leaves which are too far from the head leaf (the leaf in the longest chain).

A leaf is removed when the depth difference between it and the head leaf, measured from their common ancestor, is greater than MAX_DEPTH.
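Expressed as a sketch, with an illustrative MAX_DEPTH value rather than the real constant:

const MAX_DEPTH: u64 = 20; // illustrative value

// A leaf is pruned when the head leaf has advanced more than MAX_DEPTH
// events past the fork point (common ancestor) shared with that leaf.
fn should_prune(head_depth: u64, common_ancestor_depth: u64) -> bool {
    head_depth - common_ancestor_depth > MAX_DEPTH
}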

Update the root

Find the highest common ancestor for the leaves and assign it as the root of the tree.

The highest common ancestor must have a height greater than MAX_HEIGHT.

Example1

(figure: Example1 tree data structure)

Network Protocol

The protocol checks that Events have been properly broadcast through the network before adding them to the Model.

The read_confirms counter inside each Event indicates how many times the Event has been read by other nodes in the network.

The protocol classifies the Events by their state:

Unread: read_confirms <  MAX_CONFIRMS
Read:   read_confirms >= MAX_CONFIRMS

Inv

Inventory vectors notify other nodes about objects they have or data which is being requested.

Description    Data Type        Comments
invs           Vec<[u8; 32]>    Inventory items

Receiving an Inv message

An Inv message allows a node to advertise its knowledge of one or more objects. It can be received unsolicited or in reply to getevents.

An Inv message is a confirmation from a node in the network that the Event has been read.

If a confirmation arrives for an Event that does not exist in the UnreadEvents list, the protocol sends a GetData message to request the missing Event.

Otherwise, the protocol updates the Event in the UnreadEvents list by increasing its read_confirms by one.

The Event's state changes to read once read_confirms reaches MAX_CONFIRMS. The Event is then removed from the UnreadEvents list and added to the Model.

The protocol rebroadcasts the received Inv to the network.
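Putting these steps together, a sketch of the Inv receive path might look like this; the names, return variants and MAX_CONFIRMS value are illustrative.

use std::collections::HashMap;

type EventId = [u8; 32];
const MAX_CONFIRMS: u8 = 3; // illustrative threshold

struct Event {
    read_confirms: u8,
    // ...
}

enum InvAction {
    RequestMissing(EventId), // send GetData for an unknown event
    MovedToModel(EventId),   // confirmations reached MAX_CONFIRMS
    Updated,                 // confirmation counted, still unread
}

// Handle one Inv entry against the UnreadEvents list.
fn handle_inv(unread: &mut HashMap<EventId, Event>, id: EventId) -> InvAction {
    if let Some(ev) = unread.get_mut(&id) {
        ev.read_confirms += 1;
        if ev.read_confirms < MAX_CONFIRMS {
            return InvAction::Updated;
        }
    } else {
        return InvAction::RequestMissing(id);
    }

    // Confirmations reached the threshold: promote the event to the Model.
    unread.remove(&id);
    InvAction::MovedToModel(id)
}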

Sending an Inv message

Upon receiving an Event with unread status from the network, the protocol sends back an Inv message to confirm that the Event has been read.

GetData

Description    Data Type       Comments
events         Vec<EventId>    A list of EventIds

Receiving a GetData message

The protocol searches both the Model and UnreadEvents for the Events requested in the GetData message.

UnreadEvents

Description    Data Type                  Comments
Messages       HashMap<EventId, Event>    Holds all the Events that have been broadcast to other nodes but not yet confirmed

Add new Event to UnreadEvents

To add an Event to UnreadEvents, the protocol must first check the validity of the Event.

An Event is not valid if its timestamp is too far in the future or the past.
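A sketch of that validity rule, with an assumed drift window since the exact bound isn't given here:

use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative window; the real protocol uses its own constant.
const MAX_CLOCK_DRIFT_SECS: u64 = 60 * 60;

// Reject events timestamped too far in the future or the past
// relative to our local clock.
fn is_valid_timestamp(event_ts: u64) -> bool {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    event_ts.abs_diff(now) <= MAX_CLOCK_DRIFT_SECS
}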

Updating UnreadEvents list

The protocol continuously rebroadcasts unread Events to the network at a fixed interval (SEND_UNREAD_EVENTS_INTERVAL) until the state of the Event updates to read.

SyncEvent

Description    Data Type       Comments
Leaves         Vec<EventId>    Hash of Events

Synchronization

To achieve complete synchronization between nodes, the protocol sends a SyncEvent message every 2 seconds to other nodes in the network.

The SyncEvent contains the hashes of Events set in the leaves of Model's tree.

On receiving a SyncEvent message, the leaves in the SyncEvent should match the leaves in the Model's tree; otherwise, the protocol sends back the Events that are children of the Events in the SyncEvent.
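A sketch of that timer loop; our_leaves and send_sync_event stand in for the real tree lookup and p2p broadcast.

use std::time::Duration;

// Every 2 seconds, send the hashes of our tree's leaves to peers.
async fn sync_loop(
    our_leaves: impl Fn() -> Vec<[u8; 32]>,
    send_sync_event: impl Fn(Vec<[u8; 32]>),
) {
    loop {
        send_sync_event(our_leaves());
        async_std::task::sleep(Duration::from_secs(2)).await;
    }
}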

Seen

This prevents the node from processing duplicate objects. The list contains at most 2^16 ids.

Description    Data Type        Comments
Ids            Vec<ObjectId>    Contains object ids
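Here is a sketch of such a bounded list; the FIFO eviction policy is an assumption, since the text only fixes the 2^16 capacity.

use std::collections::{HashSet, VecDeque};

struct Seen {
    order: VecDeque<[u8; 32]>,
    ids: HashSet<[u8; 32]>,
}

impl Seen {
    const CAPACITY: usize = 1 << 16;

    // Returns true if the id is new, false if it is a duplicate.
    fn insert(&mut self, id: [u8; 32]) -> bool {
        if !self.ids.insert(id) {
            return false; // duplicate object, drop it
        }
        self.order.push_back(id);
        if self.order.len() > Self::CAPACITY {
            // Evict the oldest id to keep the list bounded.
            if let Some(old) = self.order.pop_front() {
                self.ids.remove(&old);
            }
        }
        true
    }
}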

Receiving a new Event

A newly received Event with unread status is added to the UnreadEvents buffer after its read_confirms is increased by one.

An Event with read status is added directly to the Model.

The protocol then rebroadcasts the received Event to the network. This ensures all nodes in the network get the Event.

Sending an Event

A newly created Event has unread status, with read_confirms equal to 0.

The protocol broadcasts the Event to the network after adding it to the UnreadEvents.

Add new Event to Model

For an Event to be successfully added to the Model, the protocol checks whether the hash of the previous Event, stored inside the Event, exists in the Model.

If the previous Event check fails, the protocol sends a GetData message requesting the previous Event.

Darkwiki

A collaborative wiki using a peer-to-peer network and Raft consensus.

Install

% git clone https://github.com/darkrenaissance/darkfi
% cd darkfi
% make BINS="darkwiki darkwikid"
% sudo make install BINS="darkwikid darkwiki"

Usage

1 - Once Darkwiki is installed, the darkwiki daemon must be running in the background:

% darkwikid

2 - To update the synchronized directory (default ~/darkwiki) and receive new documents from the network:

% darkwiki update

NOTE: The synchronized directory path can be changed from the config file in ~/.config/darkfi/darkwiki.toml

3 - After adding or editing a document in ~/darkwiki, publish the changes by running the update command:

% darkwiki update

4 - To restore files with local changes back to their original text:

% darkwiki restore

5 - For both the restore and update commands, the --dry-run flag shows the changes without applying or publishing the patches:

% darkwiki update --dry-run

6 - Both the restore and update commands accept file names, instead of updating or restoring all the documents in ~/darkwiki:

% darkwiki update file1.md file2.md 

Dnetview

A simple TUI to explore the DarkFi ircd network topology.

dnetview displays:

  1. all active nodes
  2. outgoing, incoming and manual sessions
  3. each associated connection and recent messages.

dnetview is based on the Model-View-Controller design pattern. We create a logical separation between the underlying data structure, or Model; the UI rendering aspect, which is the View; and the Controller or game engine that makes everything run.

Install

% git clone https://github.com/darkrenaissance/darkfi 
% cd darkfi
% make BINS=dnetview

Usage

Run dnetview as follows:

dnetview -v

On first run, dnetview will create a config file in ~/.config/darkfi. You must manually enter the RPC ports of the nodes you want to connect to and title them as you see fit.

dnetview creates a log file at /tmp/dnetview.log. To see JSON data and other debug info, tail the file like so:

tail -f /tmp/dnetview.log

Learn

This section contains learning resources related to DarkFi.

Research

DarkFi maintains a public resource of zero-knowledge and math research in the script/research directory of the repo.

It features simple sage implementations of zero-knowledge algorithms and math primitives, including but not limited to:

Zero-knowledge explainer

We start with this algorithm as an example:

def foo(w, a, b):
    if w:
        return a * b
    else:
        return a + b

ZK code consists of lines of constraints. It has no concept of branching conditionals or loops.

So our first task is to flatten (convert) the above code to a linear equation that can be evaluated in ZK.

Consider an interesting fact: for any value w, w × w = w if and only if w is either 0 or 1.

In our code above, w is a binary value. Its value is either 1 or 0. We make use of this fact as follows:

  1. w × (a × b) is returned when w = 1.
  2. (1 - w) × (a + b) is returned when w = 0. If w = 1 then this expression is 0.

So we can rewrite foo(w, a, b) as the mathematical function:

f(w, a, b) = w(ab) + (1 - w)(a + b)

We now can convert this expression to a constraint system.

ZK statements take the form of:

(a_1 × v_1 + a_2 × v_2 + ...) × (b_1 × v_1 + b_2 × v_2 + ...) = (c_1 × v_1 + c_2 × v_2 + ...)

More succinctly as:

L(x) × R(x) = O(x)

These statements are converted into polynomials of the form:

t(x)h(x) = L(x)R(x) - O(x)

t(x) is the target polynomial, which in our case will be (x - 1)(x - 2)(x - 3). h(x) is the cofactor polynomial. The statement says that the polynomial L(x)R(x) - O(x) has roots (is equal to zero) at the points x = 1, 2 and 3.

Earlier we wrote our mathematical statement f(w, a, b) = w(ab) + (1 - w)(a + b), which we will now convert to constraints.

Rearranging the equation, we note that:

f = w(ab) + (1 - w)(a + b) = w(ab - a - b) + a + b

Swapping and rearranging, our final statement becomes w(ab - a - b) = f - a - b. Introducing an intermediate variable m = ab, this is represented in ZK as:

m = a × b
w × (m - a - b) = f - a - b
w × w = w

The last line is a boolean constraint ensuring that w is either 0 or 1, by enforcing that w × w = w (rearranged, this is w(w - 1) = 0).

Line    L(x)    R(x)         O(x)
1       a       b            m
2       w       m - a - b    f - a - b
3       w       w            w

Because of how the polynomials are created during the setup phase, you must supply them with the correct variables that satisfy these constraints, so that a × b = m (line 1), w × (m - a - b) = f - a - b (line 2), and w × w = w (line 3).

Each one of L(x), R(x) and O(x) is supplied a list of (constant coefficient, variable value) pairs.

In the bellman library, the constant is a fixed value of type Scalar. The variable is a type called Variable. These are the values fed into lc0 (the 'left' polynomial), lc1 (the 'right' polynomial), and lc2 (the 'out' polynomial).

In our example we had the function f(w, a, b) = w(ab) + (1 - w)(a + b). The verifier does not know the variables w, a and b, which are allocated by the prover as variables. However, the verifier does know the coefficients (which are of the Scalar type) shown in the table above. In our example they are only either 1 or -1, but they can also be other constant values.

pub struct LinearCombination<Scalar: PrimeField>(Vec<(Variable, Scalar)>);

It is important to note that each one of the left, right and out registers is simply a list of tuples of (constant coefficient, variable value).

When we wish to add a constant value c, we use the variable called ~one (which is always the first automatically allocated variable in bellman at index 0). Therefore we end up adding our constant to the LinearCombination as (c, ~one).

Any other non-constant value we wish to add to our constraint system must be allocated as a variable. The variable is then added to the LinearCombination. So in our example, we will allocate w, a and b, getting back Variable objects which we then add to the left lc, right lc or output lc.
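To connect this to code, here is a sketch of allocating the witness w and enforcing the boolean constraint from line 3 of the table, using bellman's ConstraintSystem alloc and enforce methods; the helper function itself is illustrative, not code from the repo.

use bellman::{ConstraintSystem, SynthesisError};
use ff::PrimeField;

// Enforce w * w = w, so w must be 0 or 1. The coefficients passed to
// each linear combination here are implicitly 1.
fn enforce_boolean<Scalar: PrimeField, CS: ConstraintSystem<Scalar>>(
    cs: &mut CS,
    w_value: Option<Scalar>,
) -> Result<(), SynthesisError> {
    // Allocate w as a private variable known only to the prover.
    let w = cs.alloc(|| "w", || w_value.ok_or(SynthesisError::AssignmentMissing))?;

    // left = w, right = w, out = w  =>  w * w = w
    cs.enforce(|| "w is boolean", |lc| lc + w, |lc| lc + w, |lc| lc + w);

    Ok(())
}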

Dchat: Writing a p2p app

This tutorial will teach you how to deploy an app on DarkFi's p2p network.

We will create a terminal-based p2p chat app called dchat that we run in two different instances: an inbound and outbound node called Alice and Bob. Alice takes a message from stdin and broadcasts it to the p2p network. When Bob receives the message on the p2p network it is displayed in his terminal.

Dchat will showcase some key concepts that you'll need to develop on the p2p network, in particular:

  • Understanding inbound, outbound and seed nodes.
  • Writing and registering a custom Protocol.
  • Creating and subscribing to a custom Message type.

The source code for this tutorial can be found at example/dchat.

Part 1: Deploying the network

We'll start by deploying a local version of the p2p network. This will introduce a number of key concepts:

  • p2p daemons
  • Inbound, outbound, manual and seed nodes
  • Understanding Sessions
  • p2p.start(), p2p.run() and p2p.stop()

Getting started

We'll create a new cargo directory and add DarkFi to our Cargo.toml, like so:

[dependencies]
darkfi = {path = "../../", features = ["net", "rpc"]}
darkfi-serial = {path = "../../src/serial"}

Be sure to replace the path to DarkFi with the correct path for your setup.

Once that's done we can access DarkFi's net methods inside of dchat. We'll need a few more external libraries too, so add these dependencies:

async-std = "1.12.0"
async-trait = "0.1.58"
easy-parallel = "3.2.0"
smol = "1.2.5"
num_cpus = "1.14.0"

log = "0.4.17"
simplelog = "0.12.0"
url = "2.3.1"

serde_json = "1.0.87"
serde = {version = "1.0.147", features = ["derive"]}
toml = "0.5.9"

Writing a daemon

DarkFi consists of many separate daemons communicating with each other. To run the p2p network, we'll need to implement our own daemon. So we'll start building dchat by configuring our main function as a daemon that can run the p2p network.

use async_std::sync::{Arc, Mutex};
use easy_parallel::Parallel;
use smol::Executor;

#[async_std::main]
async fn main() -> Result<()> {
    let ex = Arc::new(Executor::new());
    let ex2 = ex.clone();

    let nthreads = num_cpus::get();
    let (signal, shutdown) = smol::channel::unbounded::<()>();

    let (_, result) = Parallel::new()
        .each(0..nthreads, |_| smol::future::block_on(ex2.run(shutdown.recv())))
        .finish(|| {
            smol::future::block_on(async move {
                drop(signal);
                Ok(())
            })
        });

    result
}

We get the number of cpu cores using num_cpus::get() and spin up a bunch of threads in parallel using easy_parallel. Right now it doesn't do anything, but soon we'll run dchat inside this block.

Note: DarkFi includes a macro called async_daemonize that is used by DarkFi binaries to minimize boilerplate in the repo. To keep things simple we will ignore this macro for the purpose of this tutorial. But check it out if you are curious: util/cli.rs.

Sessions

To deploy the p2p network, we need to configure two types of nodes: inbound and outbound. These nodes perform different roles on the p2p network. An inbound node receives connections. An outbound node makes connections.

The behavior of these nodes is defined in what is called a Session. There are four types of sessions: Manual, Inbound, Outbound and SeedSync.

Their behavior is as follows:

Inbound: Uses an Acceptor to accept connections on the inbound connect address configured in settings.

Outbound: Starts a connect loop for every connect slot configured in settings. Establishes a connection using Connector.connect(), a method that takes an address and returns a Channel.

Manual: Uses a Connector to connect to a single address that is passed to ManualSession::connect(). Used to create an explicit connection to a specified address.

SeedSync: Creates a connection to the seed nodes specified in settings. Loops through all the configured seeds and tries to connect to them using a Connector. Either connects successfully, fails with an error or times out.

Settings

To create an inbound and an outbound node, we will need to configure them using a net type called Settings. This type consists of several settings that allow you to configure nodes in different ways.

You would usually configure Settings using a config file or command-line input. In dchat we are keeping things ultra simple. We pass a command-line flag that is either a or b. If we pass a, we will initialize the Settings for an inbound node. If we pass b, we will initialize an outbound node.

Here's how that works. We define two methods called alice() and bob(). alice() returns the Settings that will create an inbound node. bob() returns the Settings for an outbound node.

We also implement logging that outputs to /tmp/alice.log and /tmp/bob.log so we can access the debug output of our nodes. We store this info in a log file because we don't want it interfering with our terminal UI when we eventually build it.

This is a function that returns the settings to create Alice, an inbound node:

fn alice() -> Result<Settings> {
   let log_level = simplelog::LevelFilter::Debug;
   let log_config = simplelog::Config::default();

   let log_path = "/tmp/alice.log";
   let file = File::create(log_path).unwrap();
   WriteLogger::init(log_level, log_config, file)?;

   let seed = Url::parse("tcp://127.0.0.1:55555").unwrap();
   let inbound = Url::parse("tcp://127.0.0.1:55554").unwrap();
   let ext_addr = Url::parse("tcp://127.0.0.1:55554").unwrap();

   let settings = Settings {
       inbound: Some(inbound),
       external_addr: Some(ext_addr),
       seeds: vec![seed],
       ..Default::default()
   };

   Ok(settings)
}

This is a function that returns the settings to create Bob, an outbound node:

fn bob() -> Result<Settings> {
   let log_level = simplelog::LevelFilter::Debug;
   let log_config = simplelog::Config::default();

   let log_path = "/tmp/bob.log";
   let file = File::create(log_path).unwrap();
   WriteLogger::init(log_level, log_config, file)?;

   let seed = Url::parse("tcp://127.0.0.1:55555").unwrap();

   let settings = Settings {
       inbound: None,
       outbound_connections: 5,
       seeds: vec![seed],
       ..Default::default()
   };

   Ok(settings)
}

Both outbound and inbound nodes specify a seed address to connect to. The inbound node also specifies an external address and an inbound address: this is where it will receive connections. The outbound node specifies the number of outbound connection slots, which is the number of outbound connections the node will try to make.

These are the only settings we need to think about. For the rest, we use the network defaults.

Error handling

Before we continue, we need to quickly add some error handling to handle the case where a user forgets to add the command-line flag.

use std::{error, fmt};

#[derive(Debug, Clone)]
pub struct ErrorMissingSpecifier;

impl fmt::Display for ErrorMissingSpecifier {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "missing node specifier. you must specify either a or b")
    }
}

impl error::Error for ErrorMissingSpecifier {}

We can then read the flag from the command-line by adding the following lines to main():

use crate::dchat_error::ErrorMissingSpecifier;
use darkfi::net::Settings;

pub type Error = Box<dyn error::Error>;
pub type Result<T> = std::result::Result<T, Error>;

async fn main() -> Result<()> {
    // ...
    let settings: Result<Settings> = match std::env::args().nth(1) {
        Some(id) => match id.as_str() {
            "a" => alice(),
            "b" => bob(),
            _ => Err(ErrorMissingSpecifier.into()),
        },
        None => Err(ErrorMissingSpecifier.into()),
    };
    // ...
}

Start-Run-Stop

Now that we have initialized the network settings we can create an instance of the p2p network.

Add the following to main():

    let p2p = net::P2p::new(settings?).await;

We will next create a Dchat struct that will store all the data required by dchat. For now, it will just hold a pointer to the p2p network.

struct Dchat {
    p2p: net::P2pPtr,
}

impl Dchat {
    fn new(p2p: net::P2pPtr) -> Self {
        Self { p2p }
    }
}

Now let's add a start() function to the Dchat implementation. start() takes an executor and runs three p2p methods, p2p::start(), p2p::run(), and p2p::stop().

    async fn start(&mut self, ex: Arc<Executor<'_>>) -> Result<()> {
        let ex2 = ex.clone();

        self.p2p.clone().start(ex.clone()).await?;
        ex2.spawn(self.p2p.clone().run(ex.clone())).detach();

        self.p2p.stop().await;

        Ok(())
    }

Let's take a quick look at the underlying p2p methods we're using here.

Start

This is start():

    pub async fn start(self: Arc<Self>, executor: Arc<Executor<'_>>) -> Result<()> {
        debug!(target: "net", "P2p::start() [BEGIN]");

        *self.state.lock().await = P2pState::Start;

        // Start seed session
        let seed = SeedSyncSession::new(Arc::downgrade(&self));
        // This will block until all seed queries have finished
        seed.start(executor.clone()).await?;

        *self.state.lock().await = P2pState::Started;

        debug!(target: "net", "P2p::start() [END]");
        Ok(())
    }

start() changes the P2pState to P2pState::Start and runs a seed session.

This loops through the seed addresses specified in our Settings and tries to connect to them. The seed session either connects successfully, fails with an error or times out.

If a seed node connects successfully, it runs a version exchange protocol, stores the channel in the p2p list of channels, and disconnects, removing the channel from the channel list.

Run

This is run():

    pub async fn run(self: Arc<Self>, executor: Arc<Executor<'_>>) -> Result<()> {
        debug!(target: "net", "P2p::run() [BEGIN]");

        *self.state.lock().await = P2pState::Run;

        let manual = self.session_manual().await;
        for peer in &self.settings.peers {
            manual.clone().connect(peer, executor.clone()).await;
        }

        let inbound = self.session_inbound().await;
        inbound.clone().start(executor.clone()).await?;

        let outbound = self.session_outbound().await;
        outbound.clone().start(executor.clone()).await?;

        let stop_sub = self.subscribe_stop().await;
        // Wait for stop signal
        stop_sub.receive().await;

        // Stop the sessions
        manual.stop().await;
        inbound.stop().await;
        outbound.stop().await;

        debug!(target: "net", "P2p::run() [END]");
        Ok(())
    }

run() changes the P2pState to P2pState::Run. It then calls start() on the manual, inbound and outbound sessions contained within the P2p struct. The outcome of start() depends on how your node is configured: start() will try to run each kind of session, but if the configuration doesn't match, attempting to start a session will simply return without doing anything. For example, if you are an outbound node, inbound.start() will return with the following message:

info!(target: "net", "Not configured for accepting incoming connections.");

run() then waits for a stop signal and shuts down the sessions when it is received.

Stop

This is stop():

    pub async fn stop(&self) {
        self.stop_subscriber.notify(()).await
    }

stop() transmits a shutdown signal to all channels subscribed to the stop signal and safely shuts down the network.

The seed node

Let's create an instance of dchat inside our main function and pass the p2p network into it. Then we'll add dchat::start() to our async loop in the main function.

#[async_std::main]
async fn main() -> Result<()> {
    let settings: Result<Settings> = match std::env::args().nth(1) {
        Some(id) => match id.as_str() {
            "a" => alice(),
            "b" => bob(),
            _ => Err(ErrorMissingSpecifier.into()),
        },
        None => Err(ErrorMissingSpecifier.into()),
    };

    let p2p = net::P2p::new(settings?.into()).await;

    let dchat = Dchat::new(p2p);

    let nthreads = num_cpus::get();
    let (signal, shutdown) = async_channel::unbounded::<()>();

    let ex = Arc::new(Executor::new());
    let ex2 = ex.clone();

    let (_, result) = Parallel::new()
        .each(0..nthreads, |_| {
            smol::future::block_on(ex.run(shutdown.recv()))
        })
        .finish(|| {
            smol::future::block_on(async move {
                dchat.start(ex2).await?;
                drop(signal);
                Ok(())
            })
        });

    result
}

Now try to run the program. Don't forget to add the specifier a or b to define the type of node.

It should output the following error:

Error: NetworkOperationFailed

That's because there is no seed node online for our nodes to connect to. A seed node is used when connecting to the network: it is a special kind of inbound node that gets connected to, sends over a list of addresses and disconnects again. This behavior is defined in the ProtocolSeed.

Every time we run p2p.start(), we attempt to connect to a seed using a SeedSyncSession. If the SeedSyncSession fails, p2p.start() will fail, so without a seed node our inbound and outbound nodes cannot establish a connection to the network. Let's remedy that.

We have two options here. First, we could implement our own seed node. Alternatively, DarkFi maintains a master seed node called lilith that can act as the seed for many different protocols at the same time. For the purpose of this tutorial let's use lilith.

What lilith does in the background is very simple. Just like any p2p daemon, a seed node defines its network settings in a type called Settings and creates a new instance of the p2p network. It then runs p2p::start() and p2p::run(). The difference is in the settings: a seed node just specifies an inbound address which other nodes will connect to.

Crucially, this inbound address must match the seed address we specified earlier in Alice and Bob's settings.

Deploying a local network

Get ready to spin up a bunch of different terminals. We are going to run 3 nodes: Alice, Bob, and our seed node. To run the seed node, go to the lilith directory and spawn a new config file by running it once:

cd darkfi
make BINS=lilith
./lilith

You should see the following output:

Config file created in '"/home/USER/.config/darkfi/lilith_config.toml"'. Please review it and try again.

Add dchat to the config as follows, keeping in mind that the port number must match the seed we specified earlier in Alice and Bob's settings.

[network."dchat"]
port = 50515
localnet = true

Now run lilith:

./lilith

Here's what the debug output should look like:

[INFO] Found configuration for network: dchat
[INFO] Starting seed network node for dchat at: tcp://127.0.0.1:50515
[WARN] Skipping seed sync process since no seeds are configured.
[INFO] Starting inbound session on tcp://127.0.0.1:50515
[INFO] Starting 0 outbound connection slots.

Next we'll head back to dchat and run Alice.

cargo run a

You can cat or tail the log file created in /tmp/. I recommend using multitail for colored debug output, like so:

multitail -c /tmp/alice.log

Check out that debug output! Keep an eye out for this line:

[INFO] Connected seed #0 [tcp://127.0.0.1:55555]

That shows Alice has connected to the seed node. Here's some more interesting output:

[DEBUG] (1) net: Attached ProtocolPing
[DEBUG] (1) net: Attached ProtocolSeed
[DEBUG] (1) net: ProtocolVersion::run() [START]
[DEBUG] (1) net: ProtocolVersion::exchange_versions() [START]

This raises an interesting question: what are these protocols? We'll deal with that in more detail in a subsequent section. For now it's worth noting that every node on the p2p network performs several protocols when it connects to another node.

Keep Alice and the seed node running. Now let's run Bob.

cargo run b

And track his debug output:

multitail -c /tmp/bob.log

Success! All going well, Alice and Bob are now connected to each other. We should be able to watch ping and pong messages being sent across by tracking their debug output.

We have created a local deployment of the p2p network.

Part 2: Creating dchat

Now that we've deployed a local version of the p2p network, we can start creating a custom protocol and message types that dchat will use to send and receive messages across the network.

This section will cover:

  • The Message type
  • Protocols and the ProtocolRegistry
  • The MessageSubsystem
  • MessageSubscription
  • Channel

Creating a Message type

We'll start by creating a custom Message type called DchatMsg. This is the data structure that we'll use to send messages between dchat instances.

Messages on the p2p network must implement the Message trait. Message is a generic type that standardizes all messages on DarkFi's p2p network.

We define a custom type called DchatMsg that implements the Message trait. We also add darkfi::util::SerialEncodable and darkfi::util::SerialDecodable macros to our struct definition so our messages can be parsed by the network.

Message requires that we implement a method called name(), which returns a str of the struct's name.

For the purposes of our chat program, we will also define a buffer where we can write messages upon receiving them on the p2p network. We'll wrap this in a Mutex to ensure thread safety and an Arc pointer so we can pass it around.

use async_std::sync::{Arc, Mutex};

use darkfi::net;
use darkfi_serial::{SerialDecodable, SerialEncodable};

pub type DchatMsgsBuffer = Arc<Mutex<Vec<DchatMsg>>>;

impl net::Message for DchatMsg {
    fn name() -> &'static str {
        "DchatMsg"
    }
}

#[derive(Debug, Clone, SerialEncodable, SerialDecodable)]
pub struct DchatMsg {
    pub msg: String,
}

Understanding protocols

We now need to implement a custom protocol which defines how our chat program interacts with the p2p network.

We've already interacted with several protocols. Protocols are automatically activated when nodes connect to each other on the p2p network. Here are examples of two protocols that every node runs continuously in the background:

Under the hood, these protocols have a few similarities:

This introduces several generic interfaces that we must use to build our custom protocol. In particular:

The Message Subsystem

MessageSubsystem is a generic publish/subscribe class that contains a list of Message dispatchers. A new dispatcher is created for every Message type. These Message-specific dispatchers maintain a list of subscribers that are subscribed to a particular Message.

Message Subscription

A subscription to a specific Message type. Handles receiving messages on a subscription.

Channel

Channel is an async connection for communication between nodes. It is also a powerful interface that exposes methods to the MessageSubsystem and implements MessageSubscription.

The Protocol Registry

ProtocolRegistry is a registry of all protocols. We use it through the method register(), which takes a protocol constructor and a session bitflag. The bitflag specifies which sessions the protocol is created for. The ProtocolRegistry then spawns new protocols for different channels depending on the session.

ProtocolJobsManager

An asynchronous job manager that spawns and stops tasks. Its main purpose is to let a protocol cleanly close all started jobs through the function close_all_tasks(). This way, if the connection between nodes is dropped and the channel closes, all protocols are also shut down.

ProtocolBase

A generic protocol trait that all protocols must implement.

ProtocolDchat

Let's start tying these concepts together. We'll define a struct called ProtocolDchat that contains a MessageSubscription to DchatMsg and a pointer to the ProtocolJobsManager. We'll also include the DchatMsgsBuffer in the struct as it will come in handy later on.

use async_std::sync::Arc;
use async_trait::async_trait;
use darkfi::{net, Result};
use log::debug;
use smol::Executor;

use crate::dchatmsg::{DchatMsg, DchatMsgsBuffer};

pub struct ProtocolDchat {
    jobsman: net::ProtocolJobsManagerPtr,
    msg_sub: net::MessageSubscription<DchatMsg>,
    msgs: DchatMsgsBuffer,
}

Next we'll implement the trait ProtocolBase. ProtocolBase requires two functions, start() and name(). In start() we will start up the ProtocolJobsManager. name() will return a str of the protocol name.

#[async_trait]
impl net::ProtocolBase for ProtocolDchat {
    async fn start(self: Arc<Self>, executor: Arc<Executor<'_>>) -> Result<()> {
        self.jobsman.clone().start(executor.clone());
        Ok(())
    }

    fn name(&self) -> &'static str {
        "ProtocolDchat"
    }
}

Once that's done, we'll need to create a ProtocolDchat constructor that we will pass to the ProtocolRegistry to register our protocol. We'll invoke the MessageSubsystem and add DchatMsg to the list of dispatchers. Next, we'll create a MessageSubscription to DchatMsg using the method subscribe_msg().

We'll also initialize the ProtocolJobsManager and finally return a pointer to the protocol.

impl ProtocolDchat {
    pub async fn init(channel: net::ChannelPtr, msgs: DchatMsgsBuffer) -> net::ProtocolBasePtr {
        debug!(target: "dchat", "ProtocolDchat::init() [START]");
        let message_subsytem = channel.get_message_subsystem();
        message_subsytem.add_dispatch::<DchatMsg>().await;

        let msg_sub =
            channel.subscribe_msg::<DchatMsg>().await.expect("Missing DchatMsg dispatcher!");

        Arc::new(Self {
            jobsman: net::ProtocolJobsManager::new("ProtocolDchat", channel.clone()),
            msg_sub,
            msgs,
        })
    }
}

We're nearly there. But right now the protocol doesn't actually do anything. Let's write a method called handle_receive_msg() which receives a message on our MessageSubscription and adds it to DchatMsgsBuffer.

Put this inside the ProtocolDchat implementation:

    async fn handle_receive_msg(self: Arc<Self>) -> Result<()> {
        debug!(target: "dchat", "ProtocolDchat::handle_receive_msg() [START]");
        while let Ok(msg) = self.msg_sub.receive().await {
            let msg = (*msg).to_owned();
            self.msgs.lock().await.push(msg);
        }

        Ok(())
    }

As a final step, let's add that task to the ProtocolJobsManager that is invoked in start():

    async fn start(self: Arc<Self>, executor: Arc<Executor<'_>>) -> Result<()> {
        debug!(target: "dchat", "ProtocolDchat::ProtocolBase::start() [START]");
        self.jobsman.clone().start(executor.clone());
        self.jobsman.clone().spawn(self.clone().handle_receive_msg(), executor.clone()).await;
        debug!(target: "dchat", "ProtocolDchat::ProtocolBase::start() [STOP]");
        Ok(())
    }

Registering a protocol

We've now successfully created a custom protocol. The next step is to register the protocol with the ProtocolRegistry.

We'll define a new function inside the Dchat implementation called register_protocol(). It will invoke the ProtocolRegistry using the handle to the p2p network contained in the Dchat struct. It will then call register() on the registry and pass the ProtocolDchat constructor.

    async fn register_protocol(&self, msgs: DchatMsgsBuffer) -> Result<()> {
        debug!(target: "dchat", "Dchat::register_protocol() [START]");
        let registry = self.p2p.protocol_registry();
        registry
            .register(!net::SESSION_SEED, move |channel, _p2p| {
                let msgs2 = msgs.clone();
                async move { ProtocolDchat::init(channel, msgs2).await }
            })
            .await;
        debug!(target: "dchat", "Dchat::register_protocol() [STOP]");
        Ok(())
    }

There's a lot going on here. register() takes a closure with two arguments, channel and p2p. We use move to capture these values. We then create an async closure that captures these values, along with msgs, and uses them to call ProtocolDchat::init() in the async block.

The code would be expressed more simply as:

registry.register(!net::SESSION_SEED, async move |channel, _p2p| {
        ProtocolDchat::init(channel, msgs).await
    })
    .await;

However, we cannot do this due to a limitation with async closures. So instead we wrap the async move in a move in order to capture the variables needed by ProtocolDchat::init().

Notice the use of a bitflag. We use !SESSION_SEED to specify that this protocol should be performed by all sessions aside from the seed session.

Also notice that register_protocol() requires a DchatMsgsBuffer that we send to the ProtocolDchat constructor. We'll create the DchatMsgsBuffer in main() and pass it to Dchat::new(). Let's add DchatMsgsBuffer to the Dchat struct definition first.

struct Dchat {
    p2p: net::P2pPtr,
    recv_msgs: DchatMsgsBuffer,
}

And initialize it:

#[async_std::main]
async fn main() -> Result<()> {
    //...

    let msgs: DchatMsgsBuffer = Arc::new(Mutex::new(vec![DchatMsg { msg: String::new() }]));

    let mut dchat = Dchat::new(p2p.clone(), msgs);

    //...
    let (_, result) = Parallel::new()
        .each(0..nthreads, |_| smol::future::block_on(ex2.run(shutdown.recv())))
        .finish(|| {
            smol::future::block_on(async move {
                dchat.start(ex3).await?;
                drop(signal);
                Ok(())
            })
        });

    result
}

Finally, call register_protocol() in dchat::start():

    async fn start(&mut self, ex: Arc<Executor<'_>>) -> Result<()> {
        let ex2 = ex.clone();

        self.register_protocol(self.recv_msgs.clone()).await?;
        self.p2p.clone().start(ex.clone()).await?;
        ex2.spawn(self.p2p.clone().run(ex.clone())).detach();

        self.p2p.stop().await;

        Ok(())
    }

Now try running Alice and Bob and seeing what debug output you get. Keep an eye out for the following:

[DEBUG] (1) net: Channel::subscribe_msg() [START, command="DchatMsg", address=tcp://127.0.0.1:55555]
[DEBUG] (1) net: Channel::subscribe_msg() [END, command="DchatMsg", address=tcp://127.0.0.1:55555]
[DEBUG] (1) net: Attached ProtocolDchat

If you see that, we have successfully:

  • Implemented a custom Message and created a MessageSubscription.
  • Implemented a custom Protocol and registered it with the ProtocolRegistry.

Sending messages

The core of our application has been built. All that's left is to add a UI that takes user input, creates a DchatMsg and sends it over the network.

Let's start by creating a send() function inside Dchat. This will introduce us to a new p2p method that is essential to our chat app: p2p.broadcast().

    async fn send(&self, msg: String) -> Result<()> {
        let dchatmsg = DchatMsg { msg };
        self.p2p.broadcast(dchatmsg).await?;
        Ok(())
    }

We pass a String called msg that will be taken from user input. We use this input to initialize a message of the type DchatMsg that the network can now support. Finally, we pass the message into p2p.broadcast().

Here's what happens under the hood:

    pub async fn broadcast<M: Message + Clone>(&self, message: M) -> Result<()> {
        let chans = self.channels.lock().await;
        let iter = chans.values();
        let mut futures = FuturesUnordered::new();

        for channel in iter {
            futures.push(channel.send(message.clone()).map_err(|e| {
                format!(
                    "P2P::broadcast: Broadcasting message to {} failed: {}",
                    channel.address(),
                    e
                )
            }));
        }

        if futures.is_empty() {
            error!("P2P::broadcast: No connected channels found");
            return Ok(())
        }

        while let Some(entry) = futures.next().await {
            if let Err(e) = entry {
                error!("{}", e);
            }
        }

        Ok(())
    }

This is pretty straightforward: broadcast() takes a generic Message type and sends it across all the channels that our node has access to.

All that's left to do is to create a UI.

Slap on a UI

We'll create a new method called menu() inside the Dchat implementation. It implements a very simple UI that allows a user to send messages and see received messages in the inbox. Our inbox simply displays the messages that ProtocolDchat has saved in the DchatMsgsBuffer.

Here's what it should look like:

    async fn menu(&self) -> Result<()> {
        let mut buffer = String::new();
        let stdin = stdin();
        loop {
            println!(
                "Welcome to dchat.
    s: send message
    i: inbox
    q: quit "
            );
            stdin.read_line(&mut buffer)?;
            // Remove trailing \n
            buffer.pop();
            match buffer.as_str() {
                "q" => return Ok(()),
                "s" => {
                    // Remove trailing s
                    buffer.pop();
                    stdin.read_line(&mut buffer)?;
                    match self.send(buffer.clone()).await {
                        Ok(_) => {
                            println!("you sent: {}", buffer);
                        }
                        Err(e) => {
                            println!("send failed for reason: {}", e);
                        }
                    }
                    buffer.clear();
                }
                "i" => {
                    let msgs = self.recv_msgs.lock().await;
                    if msgs.is_empty() {
                        println!("inbox is empty")
                    } else {
                        println!("received:");
                        for i in msgs.iter() {
                            if !i.msg.is_empty() {
                                println!("{}", i.msg);
                            }
                        }
                    }
                    buffer.clear();
                }
                _ => {}
            }
        }
    }

We'll call menu() inside of dchat::start() along with our other methods, like so:

    async fn start(&mut self, ex: Arc<Executor<'_>>) -> Result<()> {
        let ex2 = ex.clone();

        self.register_protocol(self.recv_msgs.clone()).await?;
        self.p2p.clone().start(ex.clone()).await?;
        self.p2p.clone().run(ex.clone()).await?;

        self.menu().await?;

        self.p2p.stop().await;
        Ok(())
    }

But wait: if you try running this code, you'll notice that the menu never gets displayed. That's because we call .await on the previous function call, p2p.run(). p2p.run() is a loop that runs until we exit the program, so in order for it not to block other threads from executing we'll need to detach it in the background.

The complete implementation looks like this:

    async fn start(&mut self, ex: Arc<Executor<'_>>) -> Result<()> {
        debug!(target: "dchat", "Dchat::start() [START]");

        let ex2 = ex.clone();

        self.register_protocol(self.recv_msgs.clone()).await?;
        self.p2p.clone().start(ex.clone()).await?;
        ex2.spawn(self.p2p.clone().run(ex.clone())).detach();

        self.menu().await?;

        self.p2p.stop().await;

        debug!(target: "dchat", "Dchat::start() [STOP]");
        Ok(())
    }

Using dchat

We are finally ready to test our program. Spin up 5 different terminals.

In terminal 1, run lilith.

./lilith

In terminal 2, run Alice.

cargo run a 

In terminal 3, run Bob.

cargo run b

In terminal 4, display Alice's debug output.

multitail -c /tmp/alice.log

In terminal 5, display Bob's debug output.

multitail -c /tmp/bob.log

Now use the UI to send messages between Alice and Bob. We have successfully implemented a p2p chat program.

Network tools

In its current state, dchat is ready to use. But there are steps we can take to improve it. If we connect dchat to JSON-RPC, we gain access to a tool called dnetview that allows us to visually explore connections and messages on the p2p network.

As well as facilitating debugging, connecting dnetview is a good excuse to dive into DarkFi's rpc module which is essential to the DarkFi code base.

This section will cover:

  • DarkFi's JSON-RPC interface
  • Exploring the p2p network topology using dnetview

RPC interface

Let's begin connecting dchat up to JSON-RPC using DarkFi's rpc module.

We'll start by defining a new struct called JsonRpcInterface that takes two values, an accept Url that will receive JSON-RPC requests, and a pointer to the p2p network.

pub struct JsonRpcInterface {
    pub addr: Url,
    pub p2p: net::P2pPtr,
}

We'll need to implement a trait called RequestHandler for the JsonRpcInterface. RequestHandler exposes a method called handle_request(), which is a handler for processing incoming JSON-RPC requests. handle_request() takes a JsonRequest and returns a JsonResult. These types are defined inside jsonrpc.rs.

This is JsonResult:

#[derive(Clone, Debug, Serialize, Deserialize)]
#[serde(untagged)]
pub enum JsonResult {
    Response(JsonResponse),
    Error(JsonError),
    Notification(JsonNotification),
    Subscriber(JsonSubscriber),
}

This is JsonRequest:

#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct JsonRequest {
    /// JSON-RPC version
    pub jsonrpc: Value,
    /// Request ID
    pub id: Value,
    /// Request method
    pub method: Value,
    /// Request parameters
    pub params: Value,
}

We'll use handle_request() to run a match statement on JsonRequest.method.

Running a match on method will allow us to branch out to functions that respond to methods received over JSON-RPC. We haven't implemented any methods yet, so for now let's just return a JsonError.

#[async_trait]
impl RequestHandler for JsonRpcInterface {
    async fn handle_request(&self, req: JsonRequest) -> JsonResult {
        if req.params.as_array().is_none() {
            return JsonError::new(ErrorCode::InvalidRequest, None, req.id).into()
        }

        debug!(target: "RPC", "--> {}", serde_json::to_string(&req).unwrap());

        match req.method.as_str() {
            Some(_) | None => JsonError::new(ErrorCode::MethodNotFound, None, req.id).into(),
        }
    }
}

Accept addr

To deploy the JsonRpcInterface and start receiving JSON-RPC requests, we'll need to configure a JSON-RPC accept address.

Let's return to our functions alice() and bob(). To enable Alice and Bob to connect to JSON-RPC, we'll need to generalize them to return an RPC Url as well as a Settings.

Let's define a new struct called AppSettings that has two fields, Url and Settings.

#[derive(Clone, Debug)]
struct AppSettings {
    accept_addr: Url,
    net: Settings,
}

impl AppSettings {
    pub fn new(accept_addr: Url, net: Settings) -> Self {
        Self { accept_addr, net }
    }
}

Next, we'll change our alice() method to return an AppSettings instead of a Settings.

fn alice() -> Result<AppSettings> {
    let log_level = simplelog::LevelFilter::Debug;
    let log_config = simplelog::Config::default();

    let log_path = "/tmp/alice.log";
    let file = File::create(log_path).unwrap();
    WriteLogger::init(log_level, log_config, file)?;

    let seed = Url::parse("tcp://127.0.0.1:50515").unwrap();
    let inbound = Url::parse("tcp://127.0.0.1:51554").unwrap();
    let ext_addr = Url::parse("tcp://127.0.0.1:51554").unwrap();

    let net = Settings {
        inbound: vec![inbound],
        external_addr: vec![ext_addr],
        seeds: vec![seed],
        localnet: true,
        ..Default::default()
    };

    let accept_addr = Url::parse("tcp://127.0.0.1:55054").unwrap();
    let settings = AppSettings::new(accept_addr, net);

    Ok(settings)
}

And the same for bob():

fn bob() -> Result<AppSettings> {
    let log_level = simplelog::LevelFilter::Debug;
    let log_config = simplelog::Config::default();

    let log_path = "/tmp/bob.log";
    let file = File::create(log_path).unwrap();
    WriteLogger::init(log_level, log_config, file)?;

    let seed = Url::parse("tcp://127.0.0.1:50515").unwrap();

    let net = Settings {
        inbound: vec![],
        outbound_connections: 5,
        seeds: vec![seed],
        localnet: true,
        ..Default::default()
    };

    let accept_addr = Url::parse("tcp://127.0.0.1:51054").unwrap();
    let settings = AppSettings::new(accept_addr, net);

    Ok(settings)
}

Update main() with the new type:

#[async_std::main]
async fn main() -> Result<()> {
    let settings: Result<AppSettings> = match std::env::args().nth(1) {
        Some(id) => match id.as_str() {
            "a" => alice(),
            "b" => bob(),
            _ => Err(ErrorMissingSpecifier.into()),
        },
        None => Err(ErrorMissingSpecifier.into()),
    };

    let settings = settings?.clone();

    let p2p = net::P2p::new(settings.net).await;
    //...
}

Methods

We're ready to deploy our JsonRpcInterface. But right now it just returns JsonError::MethodNotFound. So before testing out the JSON-RPC, let's implement some methods.

We'll start with a simple pong method that replies to ping.

    async fn pong(&self, id: Value, _params: Value) -> JsonResult {
        JsonResponse::new(json!("pong"), id).into()
    }
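On the wire, the reply echoes the request id and carries the result in standard JSON-RPC 2.0 form, roughly:

{"jsonrpc":"2.0","id":42,"result":"pong"}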

And add it to handle_request():

        match req.method.as_str() {
            Some("ping") => self.pong(req.id, req.params).await,
            Some(_) | None => JsonError::new(ErrorCode::MethodNotFound, None, req.id).into(),
        }

RPC server

To deploy the JsonRpcInterface, we'll need to create an RPC server using listen_and_serve(). listen_and_serve() is a method defined in DarkFi's rpc module. It starts a JSON-RPC server that is bound to the provided accept URL and uses our previously implemented RequestHandler to handle incoming requests.

Add the following lines to main():

    let accept_addr = settings.accept_addr.clone();
    let rpc = Arc::new(JsonRpcInterface { addr: accept_addr.clone(), p2p });
    let _ex = ex.clone();
    ex.spawn(async move { listen_and_serve(accept_addr.clone(), rpc, _ex).await }).detach();

We create a new JsonRpcInterface inside an Arc pointer and pass in our accept_addr and p2p object.

Next, we create an async block that calls listen_and_serve(). The async block uses the move keyword to take ownership of the accept_addr and JsonRpcInterface values and pass them into listen_and_serve(). We use the executor to spawn listen_and_serve() as a new task and detach it, leaving it to run in the background.

We have enabled JSON-RPC.

Here's what our complete main() function looks like:

#[async_std::main]
async fn main() -> Result<()> {
    let settings: Result<AppSettings> = match std::env::args().nth(1) {
        Some(id) => match id.as_str() {
            "a" => alice(),
            "b" => bob(),
            _ => Err(ErrorMissingSpecifier.into()),
        },
        None => Err(ErrorMissingSpecifier.into()),
    };

    let settings = settings?.clone();

    let p2p = net::P2p::new(settings.net).await;

    let ex = Arc::new(Executor::new());
    let ex2 = ex.clone();
    let ex3 = ex2.clone();

    let msgs: DchatMsgsBuffer = Arc::new(Mutex::new(vec![DchatMsg { msg: String::new() }]));

    let mut dchat = Dchat::new(p2p.clone(), msgs);

    let accept_addr = settings.accept_addr.clone();
    let rpc = Arc::new(JsonRpcInterface { addr: accept_addr.clone(), p2p });
    let _ex = ex.clone();
    ex.spawn(async move { listen_and_serve(accept_addr.clone(), rpc, _ex).await }).detach();

    let nthreads = num_cpus::get();
    let (signal, shutdown) = smol::channel::unbounded::<()>();

    let (_, result) = Parallel::new()
        .each(0..nthreads, |_| smol::future::block_on(ex2.run(shutdown.recv())))
        .finish(|| {
            smol::future::block_on(async move {
                dchat.start(ex3).await?;
                drop(signal);
                Ok(())
            })
        });

    result
}

get_info

If you run Alice now, you'll see the following output:

[DEBUG] jsonrpc-server: Trying to bind listener on tcp://127.0.0.1:55054

That indicates that our JSON-RPC server is up and running. However, there's currently no client for us to connect to. That's where dnetview comes in. dnetview implements a JSON-RPC client that calls a single method: get_info().
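Before moving on, you can also sanity-check the server with a throwaway client of your own. Here's a rough sketch using async-std; it assumes the server accepts a single JSON object per read on a raw TCP connection, so the framing details may differ between darkfi versions:

use async_std::{net::TcpStream, prelude::*};

// Throwaway test client (a sketch, not part of the tutorial code).
// Assumes the server reads one JSON object per socket read.
#[async_std::main]
async fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:55054").await?;

    let req = r#"{"jsonrpc":"2.0","id":1,"method":"ping","params":[]}"#;
    stream.write_all(req.as_bytes()).await?;

    // Read the reply and print it.
    let mut buf = vec![0u8; 2048];
    let n = stream.read(&mut buf).await?;
    println!("{}", String::from_utf8_lossy(&buf[..n]));
    Ok(())
}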

To use dnetview, let's return to our JsonRpcInterface and add the following method:

    async fn get_info(&self, id: Value, _params: Value) -> JsonResult {
        let resp = self.p2p.get_info().await;
        JsonResponse::new(resp, id).into()
    }

And add it to handle_request():

        match req.method.as_str() {
            Some("ping") => self.pong(req.id, req.params).await,
            Some("get_info") => self.get_info(req.id, req.params).await,
            Some(_) | None => JsonError::new(ErrorCode::MethodNotFound, None, req.id).into(),
        }

This calls the p2p function get_info() and passes the returned data into a JsonResponse.
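Concretely, dnetview will send a request shaped roughly like this (the id is chosen by the client):

{"jsonrpc":"2.0","id":1,"method":"get_info","params":[]}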

Under the hood, this function triggers a hierarchy of get_info() calls which deliver info specific to a node, its inbound or outbound Sessions, and the Channels those Sessions run.

Here's what happens:

    pub async fn get_info(&self) -> serde_json::Value {
        // Building ext_addr_vec string
        let mut ext_addr_vec = vec![];
        for ext_addr in &self.settings.external_addr {
            ext_addr_vec.push(ext_addr.as_ref().to_string());
        }

        json!({
            "external_addr": format!("{:?}", ext_addr_vec),
            "session_manual": self.session_manual().await.get_info().await,
            "session_inbound": self.session_inbound().await.get_info().await,
            "session_outbound": self.session_outbound().await.get_info().await,
            "state": self.state.lock().await.to_string(),
        })
    }

Here we return two pieces of info that are unique to a node: external_addr and state. We couple that data with SessionInfo by calling get_info() on each Session.

Session::get_info() returns data related to a Session (for example, an Inbound accept_addr in the case of an inbound Session). Session::get_info() then calls the function Channel::get_info() which returns data specific to a Channel. This happens via a child struct called ChannelInfo.

This is ChannelInfo::get_info():

    async fn get_info(&self) -> serde_json::Value {
        let log = match &self.log {
            Some(l) => {
                let mut lock = l.lock().await;
                let ret = lock.clone();
                *lock = Vec::new();
                ret
            }
            None => vec![],
        };

        json!({
            "random_id": self.random_id,
            "remote_node_id": self.remote_node_id,
            "last_msg": self.last_msg,
            "last_status": self.last_status,
            "log": log,
        })
    }

dnetview uses the info returned from Channel and Session, together with node-specific info like external_addr, to display an overview of the p2p network. Note that ChannelInfo::get_info() drains the message log each time it's called, so every response only carries the messages logged since the previous poll.

Using dnetview

Finally, we're ready to use dnetview. Build it from the darkfi repo and spawn a config file by running it once:

cd darkfi
make BINS=dnetview
./dnetview

You should see the following output:

Config file created in '"/home/USER/.config/darkfi/dnetview_config.toml"'. Please review it and try again.

Edit the config file to include the JSON-RPC accept addresses for Alice and Bob:

[[nodes]]
name = "alice"
rpc_url="tcp://127.0.0.1:55054"

[[nodes]]
name = "bob"
rpc_url="tcp://127.0.0.1:51054"

Now run dnetview:

./dnetview

We haven't run Alice and Bob yet, so dnetview can't connect to them. Let's run Alice and Bob:

cargo run a
cargo run b

Now try running dnetview again. This time it should connect to Alice and Bob and display an overview of each node. Use j and k to navigate. See what happens when you select a Channel.

On each Channel, we see a log of messages being sent across the network. What happens when we send a message?

Send one, and you'll see Bob receive a DchatMsg message on the Channel tcp://127.0.0.1:51554. Pretty cool.

Debugging

As a final step, let's quickly turn to the debug output of dnetview, which is stored in /tmp/dnetview.log.

Run dnetview in verbose mode to enable debugging.

./dnetview -v

Here's an example output. This is Alice:

[DEBUG] (16) jsonrpc-client: <-- {"jsonrpc":"2.0","id":8105306807249776489,"result":{"external_addr":"tcp://127.0.0.1:51554","session_inbound":{"connected":{"tcp://127.0.0.1:36428":[{"accept_addr":"tcp://127.0.0.1:51554"},{"last_msg":"addr","last_status":"recv","log":[[1659950874808537094,"send","version"],[1659950874810919251,"recv","version"],[1659950874811104471,"send","verack"],[1659950874811491950,"recv","verack"],[1659950874812397628,"send","getaddr"],[1659950874814847748,"recv","getaddr"],[1659950874815100189,"send","addr"],[1659950874816306644,"recv","addr"]],"random_id":2658393884,"remote_node_id":""}]}},"session_manual":{"key":110},"session_outbound":{"slots":[]},"state":"run"}}

This is Bob:

[DEBUG] (16) jsonrpc-client: <-- {"jsonrpc":"2.0","id":17000304364801751931,"result":{"external_addr":null,"session_inbound":{"connected":{}},"session_manual":{"key":110},"session_outbound":{"slots":[{"addr":null,"channel":null,"state":"open"},{"addr":null,"channel":null,"state":"open"},{"addr":"tcp://127.0.0.1:51554","channel":{"last_msg":"addr","last_status":"sent","log":[],"random_id":3924275147,"remote_node_id":""},"state":"connected"},{"addr":null,"channel":null,"state":"open"},{"addr":"tcp://127.0.0.1:50515","channel":{"last_msg":"addr","last_status":"sent","log":[],"random_id":2182348290,"remote_node_id":""},"state":"connected"}]},"state":"run"}}

The raw data can come in handy when you need to see exactly what a node is reporting.
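Since each response arrives as a single line of JSON, piping it through jq (one of the dependencies we installed earlier) makes it far easier to read. For example:

grep 'jsonrpc-client: <--' /tmp/dnetview.log | sed 's/.*<-- //' | jq .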

Happy hacking!