data61/MP-SPDZ 338

Versatile framework for multi-party computation

anderspkd/SecureQ8 9

Input scripts for securely evaluating quantized ImageNet models with mp-spdz

mkskeller/mpc-benchmarks 8

Benchmarks for various multi-party computation frameworks

mkskeller/SPDZ-Yao 8

Yao's garbled circuit computation of SPDZ-2 code

mkskeller/SimpleOT 4

The Simplest Oblivious Transfer Protocol by Chou and Orlandi. http://users-cs.au.dk/orlandi/simpleOT/

anderspkd/TFLite-interpreter 1

A small program for playing around with the TFLite format and quantization

issue comment data61/MP-SPDZ

How to compile C++ code which includes MP-SPDZ header file?

You will have to include the MP-SPDZ root directory: g++ -I<root>

nianer

comment created time in a day

issue comment data61/MP-SPDZ

Trying to understand this memory overflow issue

Memory refers to the virtual machine memory used for container types such as Array. It is allocated based on allocation demands during compilation. The error probably means that you are accessing an array out of bounds, unless you are using the lower-level load_mem() or store_in_mem() calls directly. The latest version introduces further checks that should say more about array out-of-bounds errors.
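
As a hedged illustration (not from the issue itself), a minimal sketch of code that would typically cause such an overflow by indexing an Array past its end with a runtime index:

a = sint.Array(10)
i = regint(10)     # runtime index one past the end of the array
a[i] = sint(1)     # writes outside the memory allocated for the array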

mence40

comment created time in a day

issue comment data61/MP-SPDZ

svd in LR

I'm not familiar with SVD algorithms. In any case, you will need to rewrite any such algorithm in terms of the MP-SPDZ interface, which is documented here: https://mp-spdz.readthedocs.io/en/latest/Compiler.html#Compiler.types.Matrix

neganasiri93

comment created time in 3 days

issue comment data61/MP-SPDZ

svd in LR

I don't know what you mean.

neganasiri93

comment created time in 4 days

delete branch data61/MP-SPDZ

delete branch : scan

delete time in 6 days

push event data61/MP-SPDZ

Marcel Keller

commit sha 8e87e5c9e6617d27c343a600291d25ff1c9c0f13

Set up scanning.

view details

push time in 6 days

push event data61/MP-SPDZ

Marcel Keller

commit sha 27764081bad60f14fecd60ded488219fb26862e4

Set up scanning.

view details

push time in 6 days

create branch data61/MP-SPDZ

branch : scan

created branch time in 6 days

issue comment data61/MP-SPDZ

How to print the result of vector multiplication?

Printing of sfix vectors isn't supported. You can fix this by creating a list: print_ln('c:%s', list(c.reveal()))
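
A minimal self-contained sketch of this (the vectors and their values are assumptions for illustration):

a = sfix.Array(3)
b = sfix.Array(3)
a.assign([1.5, 2.5, 3.5])
b.assign([2, 2, 2])
c = a[:] * b[:]                       # element-wise product, an sfix vector
print_ln('c: %s', list(c.reveal()))   # wrap in list() so print_ln can format it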

hangtian-123

comment created time in 7 days

issue comment data61/MP-SPDZ

input too large for a 31-bit signed integer: 8090812416

8090812416 is the integer after conversion to 16-bit fixed-point representation: 8090812416 = 123456 * 2^16

nianer

comment created time in 7 days

issue comment data61/MP-SPDZ

input too large for a 31-bit signed integer: 8090812416

The precision of sfix is not influenced by the -B parameter but by sfix.set_precision() instead. 123456 doesn't fit the default precision of sfix: https://mp-spdz.readthedocs.io/en/latest/Compiler.html#Compiler.types.sfix
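
A hedged sketch of raising the precision so that 123456 fits (the chosen parameters are only example values):

sfix.set_precision(16, 48)    # f = 16 fractional bits, k = 48 bits total
x = sfix(123456)              # 123456 * 2^16 now fits within k bits
print_ln('x: %s', x.reveal())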

nianer

comment created time in 7 days

issue comment data61/MP-SPDZ

Optimising dot product

Your code uses a communication round for every pair of inputs. If it's possible to fix the size at compile time, you can use vec0.input_from(0) etc. Generally, the compiler optimizations depend on lengths being fixed at compile time. A workaround for variable lengths would be for the parties to input zeros up to max_vec_size.
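
A minimal sketch with the length fixed at compile time (the names vec0/vec1, the length, and the use of dot_product are assumptions for illustration):

n = 1000                      # length known at compile time
vec0 = sfix.Array(n)
vec1 = sfix.Array(n)
vec0.input_from(0)            # party 0 inputs the whole vector at once
vec1.input_from(1)            # party 1 likewise
print_ln('dot: %s', sfix.dot_product(vec0, vec1).reveal())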

psrkiran

comment created time in 8 days

push event mkskeller/CrypTen

Marcel Keller

commit sha 60eb035ec07740767c1fb395274bb840dfdb49e1

Avoid repeated dataset downloads.

view details

Marcel Keller

commit sha fd1b971dcb998bde830eadd976150c5877643213

Option for Fashion MNIST.

view details

push time in 8 days

issue comment data61/MP-SPDZ

The experimental data of comparison differs a lot from the theoretical value.

This is described in the readme: https://github.com/data61/MP-SPDZ#online-only-benchmarking

zzx-QDU

comment created time in 8 days

issue comment data61/MP-SPDZ

The experimental data of comparison differs a lot from the theoretical value.

There are other kinds of preprocessing such as random bits or daBits.

zzx-QDU

comment created time in 8 days

issue comment data61/MP-SPDZ

The experimental data of comparison differs a lot from the theoretical value.

Have you varied the batch size using -b? It could be that the bulk of the cost goes towards unused preprocessing such as random bits.

zzx-QDU

comment created time in 8 days

issue comment data61/MP-SPDZ

sbit.Matrix and sintbit.Matrix

I see. This is most likely because integer matrix multiplication (sintbit) is heavily optimized, whereas binary matrix multiplication (sbit) is not. However, I must emphasize again that the two are not comparable because the results will be different.

zzx-QDU

comment created time in 13 days

issue comment data61/MP-SPDZ

sbit.Matrix and sintbit.Matrix

  • The two aren't comparable. With sbit, 1+1=0 whereas with sintbit, 1+1=2.
  • Timings heavily depend on the security model, so it's hard to comment on your results without knowing that.
zzx-QDU

comment created time in 13 days

issue comment data61/MP-SPDZ

svd in LR

This won't work because the underlying computation in MPC is very different to the one used in the Python libraries.

neganasiri93

comment created time in 14 days

issue comment data61/MP-SPDZ

How to compute the dot product efficiently in a matrix

Z = sfix.Matrix(3, 3)
Z[0][0] = 2.444
Z[0][1] = -2.677
Z[0][2] = 8.345
Z[1][0] = 13.023
Z[1][1] = -3.983
Z[1][2] = 4.667
Z[2][0] = -5.023
Z[2][1] = 4.983
Z[2][2] = 8.121
print_ln('Z=: %s', Z.reveal_list())
sfix.set_precision(32, 63)
print_ln('Z1=: %s', Z.reveal_list())

In this code, why is the value of Z1 not equal to Z?

Changing the precision in the middle of a computation is not supported.
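
A hedged sketch of the supported pattern, namely setting the precision once before any sfix values are created:

sfix.set_precision(32, 63)           # set precision first
Z = sfix.Matrix(3, 3)
Z[0][0] = 2.444                      # then assign and compute at that precision
print_ln('Z=: %s', Z.reveal_list())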

xiyumerry

comment created time in 14 days

issue comment data61/MP-SPDZ

How to compute the dot product efficiently in a matrix

Can MP-SPDZ protocols such as comparison, square roots, exponentiation, etc. be evaluated on whole matrices? Computing every entry of a matrix individually is too slow.

You can compute in parallel using slicing. If A, B, and C are matrices of the same size, the following stores all comparison results in C:

C[:] = A[:] < B[:]
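
A self-contained sketch of that pattern, with assumed types, sizes, and values (comparison results are secret 0/1 values, so C is declared as an sint matrix here):

A = sfix.Matrix(2, 2)
B = sfix.Matrix(2, 2)
C = sint.Matrix(2, 2)
A.assign_all(1.5)
B.assign_all(2.5)
C[:] = A[:] < B[:]                  # all comparisons run in parallel
print_ln('C: %s', C.reveal_list())
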
xiyumerry

comment created time in 14 days

issue comment data61/MP-SPDZ

How to compute the dot product efficiently in a matrix

There are a number of changes such as:

  • The truncation in fixed-point multiplication can be optimized in some settings, such as [1] and [2].
  • Comparison can be computed using mixed circuits [3].
  • Exponentiation allows a wider range of inputs.

[1] https://eprint.iacr.org/2020/1330
[2] https://eprint.iacr.org/2019/131
[3] https://eprint.iacr.org/2020/338

xiyumerry

comment created time in 15 days

push event mkskeller/CrypTen

Marcel Keller

commit sha 092e23f313905c4dfb1db9b6903332eef391fc07

Inference in smaller batches.

view details

push time in 16 days

push event mkskeller/CrypTen

Marcel Keller

commit sha bb67018ca43c01f9a346711ec06c2560b7786465

Bug.

view details

Marcel Keller

commit sha e6659f790fe29ff87338a01435e180f31edd6d99

Print average loss.

view details

push time in 16 days

push event mkskeller/CrypTen

Marcel Keller

commit sha 2fc3f663095065d484e67c176e658c0e2ff6a9d0

Network D.

view details

push time in 16 days

issue comment data61/MP-SPDZ

How to compute the dot product efficiently in a matrix

I'm not quite sure what you mean. The documentation shows how to run matrix multiplication: https://mp-spdz.readthedocs.io/en/latest/Compiler.html#Compiler.types.Matrix.dot The actual computation depends on the protocol, so there is no general way of explaining it.
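
For concreteness, a minimal sketch of the documented dot() call (sizes and values are made up):

A = sfix.Matrix(3, 3)
B = sfix.Matrix(3, 3)
A.assign_all(1)
B.assign_all(2)
C = A.dot(B)                        # matrix multiplication as documented
print_ln('C: %s', C.reveal_list())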

xiyumerry

comment created time in 18 days

push event mkskeller/CrypTen

Marcel Keller

commit sha a52c01c0971970a7588ed680351e382f1e18f907

Match parameters.

view details

push time in 18 days

push event mkskeller/CrypTen

Marcel Keller

commit sha f71522299412f81d93d8babc9e04202ddbe9731b

Bug in preprocessing.

view details

push time in 18 days

issue closed facebookresearch/CrypTen

Bug with 2 convolution layers

When modifying the mpc_autograd_cnn example to have two convolutional layers as in the attached patch, I get the following error:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "~/CrypTen/examples/multiprocess_launcher.py", line 64, in _run_process
    run_process_fn(fn_args)
  File "~/CrypTen/examples/mpc_autograd_cnn/launcher.py", line 92, in _run_experiment
    run_mpc_autograd_cnn(
  File "~/CrypTen/examples/mpc_autograd_cnn/mpc_autograd_cnn.py", line 63, in run_mpc_autograd_cnn
    model = crypten.nn.from_pytorch(model_plaintext, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 45, in from_pytorch
    f = _from_pytorch_to_bytes(pytorch_model, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 106, in _from_pytorch_to_bytes
    _export_pytorch_model(f, pytorch_model, dummy_input)
  File "~/CrypTen/crypten/nn/onnx_converter.py", line 131, in _export_pytorch_model
    torch.onnx.export(pytorch_model, dummy_input, f, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 457, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 420, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 380, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1139, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 125, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 116, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "~/CrypTen/examples/mpc_autograd_cnn/mpc_autograd_cnn.py", line 177, in forward
    out = self.conv2(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [16, 16, 3, 3], expected input[1, 1, 28, 28] to have 16 channels, but got 1 channels instead

Is that a bug in CrypTen?

2conv.txt

closed time in 18 days

mkskeller

issue comment facebookresearch/CrypTen

Bug with 2 convolution layers

There was a mistake in the code.

mkskeller

comment created time in 18 days