VectorClock!CausalOrder does not tolerate message loss #102

Open
lemmy opened this issue Apr 9, 2024 · 0 comments

Labels
enhancement New feature or request

Comments

@lemmy
Member

lemmy commented Apr 9, 2024

Error: TLC threw an unexpected exception.
This was probably caused by an error in the spec or model.
See the User Output or TLC Console for clues to what happened.
The exception was a java.lang.RuntimeException: Attempted to apply the operator overridden by the Java method
public static tlc2.value.impl.Value tlc2.overrides.VectorClocks.causalOrder(tlc2.value.impl.TupleValue,tlc2.value.impl.OpValue,tlc2.value.impl.OpValue,tlc2.value.impl.OpValue),
but it produced the following error:
Index: 16, Size: 16

Support message loss, i.e., gaps in the otherwise monotonically increasing vector clock values.
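
The exception comes from step B of the Java override quoted below: a cross-node reference with clock value time is resolved by indexing the other node's log list at time - 1, which assumes that every clock value up to time is present in that log. A lost message leaves a gap, so the list is shorter than the largest clock value and the lookup runs past the end. A minimal stand-alone illustration of that arithmetic (class name and concrete values are made up for this example, not TLC code):

import java.util.LinkedList;

// Hypothetical illustration only (not part of tlc2.overrides.VectorClocks):
// the other node produced 17 events, but the entry with clock value 5 was
// lost, so only 16 entries survive in its log. A reference to clock value 17
// is still resolved as index 17 - 1 = 16.
public class GapIllustration {
    public static void main(String[] args) {
        final LinkedList<Integer> otherNodesLog = new LinkedList<>();
        for (int t = 1; t <= 17; t++) {
            if (t == 5) {
                continue; // the event with clock value 5 is missing (message loss)
            }
            otherNodesLog.add(t); // 16 entries remain
        }
        final int time = 17;      // clock value referenced by another node's entry
        final int idx = time - 1; // the "time - 1" arithmetic from step B below
        // Throws java.lang.IndexOutOfBoundsException: Index: 16, Size: 16
        otherNodesLog.get(idx);
    }
}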

CausalOrder(log, clock(_), node(_), domain(_)) ==
(*
Sort the provided log by the vector clock values indicated on each line
of the log. This operator cannot accommodate "hidden" events, meaning
events that are excluded from the log. The vector clocks must be
continuous without any gaps.
The operators clock, node, and domain return, for a log entry, its vector
clock, the node that wrote the entry, and the clock's domain, i.e., the
nodes for which the clock has values.
Imagine a log containing lines such as:
[pkt |->
[vc |->
[1 |-> 20,
0 |-> 10,
3 |-> 16,
7 |-> 21,
4 |-> 10,
6 |-> 21]],
node |-> 5,
...
]
CausalOrder(log,
LAMBDA line: line.pkt.vc,
LAMBDA line: line.node,
LAMBDA vc : DOMAIN vc)
*)

public static Value causalOrder(final TupleValue v, final OpValue opClock, final OpValue opNode,
final OpValue opDomain) {
// A1) Sort each node's individual log which can be totally ordered.
final Map<Value, LinkedList<GraphNode>> n2l = new HashMap<>();
for (int j = 0; j < v.elems.length; j++) {
final Value val = v.elems[j];
final Value nodeId = opNode.eval(new Value[] { val }, EvalControl.Clear);
final Value vc = opClock.eval(new Value[] { val }, EvalControl.Clear);
final Value nodeTime = vc.select(new Value[] { nodeId });
final Enumerable dom = (Enumerable) opDomain.eval(new Value[] { vc }, EvalControl.Clear).toSetEnum();
final LinkedList<GraphNode> list = n2l.computeIfAbsent(nodeId, k -> new LinkedList<GraphNode>());
list.add(new GraphNode(vc, nodeTime, dom, val));
}
// A2) Totally order each node's vector clocks in the log! They are likely
// already ordered, because a single process is unlikely to have its own log
// reordered.
n2l.values().forEach(list -> list.sort(new Comparator<GraphNode>() {
@Override
public int compare(GraphNode o1, GraphNode o2) {
// It suffices to compare the node's own vector clock value because a node's
// vector clock value is incremented on every receive: "Each time a process
// receives a message, it increments its own logical clock in the vector by
// one and ..."
return o1.getTime().compareTo(o2.getTime());
}
}));
// ----------------------------------------------------------------------- //
// B) Merge the totally ordered logs into a directed acyclic graph (DAG).
for (Value host : n2l.keySet()) {
final LinkedList<GraphNode> list = n2l.get(host);
// Initialize the global vector clock.
final Map<Value, Value> globalClock = new HashMap<>();
n2l.keySet().forEach(h -> globalClock.put(h, IntValue.ValZero));
for (int i = 0; i < list.size(); i++) {
final GraphNode gn = list.get(i);
globalClock.put(host, gn.getTime());
final Value c = gn.getClock();
final ValueEnumeration hosts = gn.getHosts().elements();
Value otherHost = null;
while ((otherHost = hosts.nextElement()) != null) {
final Value time = c.select(new Value[] { otherHost });
if (globalClock.get(otherHost).compareTo(time) < 0) {
globalClock.put(otherHost, time);
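// NOTE: The following parent lookup assumes that otherHost's log contains an
// entry for every clock value 1..time, i.e., that the clock values are
// gap-free. With message loss, entries are missing, so time - 1 can exceed
// the list's size (the "Index: 16, Size: 16" exception above) or select the
// wrong entry.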
int idx = ((IntValue) time).val - 1;
gn.addParent(n2l.get(otherHost).get(idx));
}
}
}
}
// ----------------------------------------------------------------------- //
// C) Pop one of the DAG's roots (zero in-degree) and append it to the list. We
// know that at least one of the list heads will be a root.
final List<Value> sorted = new ArrayList<>(v.elems.length);
int i = 0;
while (i++ < v.elems.length) {
for (Value host : n2l.keySet()) {
final LinkedList<GraphNode> list = n2l.get(host);
if (list.isEmpty()) {
continue;
}
if (!list.peek().hasParents()) {
final GraphNode g = list.remove();
sorted.add(g.delete());
}
}
}
assert sorted.size() == v.elems.length;
return new TupleValue(sorted.toArray(new Value[sorted.size()]));
}
}
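
One possible direction, sketched under the assumption that GraphNode exposes getTime() exactly as used in step A2 and that the per-node lists stay sorted by that value: replace the direct time - 1 index in step B with a search for the latest surviving entry of the other node whose own clock value is at most the referenced time, and skip the edge if no such entry exists. This is only a sketch to frame the enhancement, not a tested patch:

// Sketch: resolve the causal parent on another node for a referenced clock
// value while tolerating gaps. Relies on the list being sorted by the node's
// own clock value (step A2). Returns null when no entry with a clock value
// <= time survived in the log.
private static GraphNode findParent(final LinkedList<GraphNode> list, final Value time) {
    GraphNode candidate = null;
    for (GraphNode gn : list) {
        if (gn.getTime().compareTo(time) <= 0) {
            candidate = gn; // latest entry that is not newer than the reference
        } else {
            break; // list is sorted; no later entry can qualify
        }
    }
    return candidate;
}

Step B would then guard the edge creation, e.g. final GraphNode parent = findParent(n2l.get(otherHost), time); if (parent != null) { gn.addParent(parent); }. Whether the resulting order is the intended one for logs with hidden events still needs to be worked out; the sketch merely avoids the exception.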

lemmy added the enhancement label Apr 9, 2024
lemmy changed the title from VectorClock!CausalOrder does not tolerate lost messages to VectorClock!CausalOrder does not tolerate message loss Apr 9, 2024