
CNOCycle/auto-attack 0

Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"

CNOCycle/cleverhans 0

An adversarial example library for constructing attacks, building defenses, and benchmarking both

CNOCycle/Linux-baseImage 0

Install all required tools on Ubuntu

pull request comment fra31/auto-attack

support TF2

@fra31 I have finished the TF versions' examples.

Here is the TF1 version's output:

initial accuracy: 97.74%
apgd-ce - 1/10 - 98 out of 1000 successfully perturbed
apgd-ce - 2/10 - 129 out of 1000 successfully perturbed
apgd-ce - 3/10 - 102 out of 1000 successfully perturbed
apgd-ce - 4/10 - 102 out of 1000 successfully perturbed
apgd-ce - 5/10 - 98 out of 1000 successfully perturbed
apgd-ce - 6/10 - 74 out of 1000 successfully perturbed
apgd-ce - 7/10 - 54 out of 1000 successfully perturbed
apgd-ce - 8/10 - 55 out of 1000 successfully perturbed
apgd-ce - 9/10 - 28 out of 1000 successfully perturbed
apgd-ce - 10/10 - 74 out of 774 successfully perturbed
robust accuracy after APGD-CE: 89.60% (total time 47.2 s)
apgd-dlr - 1/9 - 10 out of 1000 successfully perturbed
apgd-dlr - 2/9 - 20 out of 1000 successfully perturbed
apgd-dlr - 3/9 - 15 out of 1000 successfully perturbed
apgd-dlr - 4/9 - 14 out of 1000 successfully perturbed
apgd-dlr - 5/9 - 5 out of 1000 successfully perturbed
apgd-dlr - 6/9 - 5 out of 1000 successfully perturbed
apgd-dlr - 7/9 - 4 out of 1000 successfully perturbed
apgd-dlr - 8/9 - 8 out of 1000 successfully perturbed
apgd-dlr - 9/9 - 10 out of 960 successfully perturbed
robust accuracy after APGD-DLR: 88.69% (total time 105.3 s)
fab - 1/9 - 0 out of 1000 successfully perturbed
fab - 2/9 - 1 out of 1000 successfully perturbed
fab - 3/9 - 2 out of 1000 successfully perturbed
fab - 4/9 - 3 out of 1000 successfully perturbed
fab - 5/9 - 0 out of 1000 successfully perturbed
fab - 6/9 - 3 out of 1000 successfully perturbed
fab - 7/9 - 1 out of 1000 successfully perturbed
fab - 8/9 - 0 out of 1000 successfully perturbed
fab - 9/9 - 2 out of 869 successfully perturbed
robust accuracy after FAB: 88.57% (total time 492.1 s)
square - 1/9 - 31 out of 1000 successfully perturbed
square - 2/9 - 36 out of 1000 successfully perturbed
square - 3/9 - 26 out of 1000 successfully perturbed
square - 4/9 - 34 out of 1000 successfully perturbed
square - 5/9 - 20 out of 1000 successfully perturbed
square - 6/9 - 15 out of 1000 successfully perturbed
square - 7/9 - 20 out of 1000 successfully perturbed
square - 8/9 - 14 out of 1000 successfully perturbed
square - 9/9 - 14 out of 857 successfully perturbed
robust accuracy after SQUARE: 86.47% (total time 727.9 s)
max Linf perturbation: 0.30000, nan in tensor: 0, max: 1.00000, min: 0.00000
robust accuracy: 86.47%

And here is the TF2 version's output:

initial accuracy: 97.74%
apgd-ce - 1/10 - 101 out of 1000 successfully perturbed
apgd-ce - 2/10 - 127 out of 1000 successfully perturbed
apgd-ce - 3/10 - 106 out of 1000 successfully perturbed
apgd-ce - 4/10 - 106 out of 1000 successfully perturbed
apgd-ce - 5/10 - 96 out of 1000 successfully perturbed
apgd-ce - 6/10 - 73 out of 1000 successfully perturbed
apgd-ce - 7/10 - 53 out of 1000 successfully perturbed
apgd-ce - 8/10 - 55 out of 1000 successfully perturbed
apgd-ce - 9/10 - 31 out of 1000 successfully perturbed
apgd-ce - 10/10 - 76 out of 774 successfully perturbed
robust accuracy after APGD-CE: 89.50% (total time 133.7 s)
apgd-dlr - 1/9 - 8 out of 1000 successfully perturbed
apgd-dlr - 2/9 - 21 out of 1000 successfully perturbed
apgd-dlr - 3/9 - 6 out of 1000 successfully perturbed
apgd-dlr - 4/9 - 13 out of 1000 successfully perturbed
apgd-dlr - 5/9 - 4 out of 1000 successfully perturbed
apgd-dlr - 6/9 - 9 out of 1000 successfully perturbed
apgd-dlr - 7/9 - 6 out of 1000 successfully perturbed
apgd-dlr - 8/9 - 4 out of 1000 successfully perturbed
apgd-dlr - 9/9 - 7 out of 950 successfully perturbed
robust accuracy after APGD-DLR: 88.72% (total time 297.1 s)
fab - 1/9 - 0 out of 1000 successfully perturbed
fab - 2/9 - 2 out of 1000 successfully perturbed
fab - 3/9 - 1 out of 1000 successfully perturbed
fab - 4/9 - 3 out of 1000 successfully perturbed
fab - 5/9 - 0 out of 1000 successfully perturbed
fab - 6/9 - 3 out of 1000 successfully perturbed
fab - 7/9 - 1 out of 1000 successfully perturbed
fab - 8/9 - 0 out of 1000 successfully perturbed
fab - 9/9 - 2 out of 872 successfully perturbed
robust accuracy after FAB: 88.60% (total time 1808.1 s)
square - 1/9 - 28 out of 1000 successfully perturbed
square - 2/9 - 35 out of 1000 successfully perturbed
square - 3/9 - 33 out of 1000 successfully perturbed
square - 4/9 - 33 out of 1000 successfully perturbed
square - 5/9 - 21 out of 1000 successfully perturbed
square - 6/9 - 12 out of 1000 successfully perturbed
square - 7/9 - 19 out of 1000 successfully perturbed
square - 8/9 - 13 out of 1000 successfully perturbed
square - 9/9 - 18 out of 860 successfully perturbed
robust accuracy after SQUARE: 86.48% (total time 2051.4 s)
max Linf perturbation: 0.30000, nan in tensor: 0, max: 1.00000, min: 0.00000
robust accuracy: 86.48%
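
For reference, a minimal sketch of how an eval_tf2.py-style script might wrap a TF2 Keras model and run the standard attack suite. The checkpoint handling and the exact AutoAttack interface (`is_tf_model`, `run_standard_evaluation`, the adapter's import path) are assumptions based on the examples folder, not the actual contents of eval_tf2.py; eps = 0.3 and the batch size of 1000 match the logs above.

import numpy as np
import tensorflow as tf
import torch

from autoattack import AutoAttack
from utils_tf2 import ModelAdapter   # adapter class from this PR; module path may differ

# Hypothetical checkpoint loading: tf_model_weight.h5 is the file added in this PR,
# but whether it stores a full model or only weights is not shown here.
model = tf.keras.models.load_model('tf_model_weight.h5', compile=False)
adapted_model = ModelAdapter(model, num_classes=10)

# MNIST test set as NCHW torch tensors, matching the PyTorch-side attack code.
(_, _), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_test = torch.from_numpy(x_test.astype(np.float32)[:, None, :, :] / 255.0)
y_test = torch.from_numpy(y_test.astype(np.int64))

# Assumed interface: pass the adapter with is_tf_model=True and run the standard evaluation.
adversary = AutoAttack(adapted_model, norm='Linf', eps=0.3, is_tf_model=True)
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=1000)
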
CNOCycle

comment created time in 2 days

push event CNOCycle/auto-attack

CNOCycle

commit sha ae9cf9f395c1a5d80b007d34febcde1ac5954ac2

add tf_model_weight.h5

view details

push time in 2 days

push event CNOCycle/auto-attack

CNOCycle

commit sha cd5a1b3e25392c35fc5a8881e433bfbad0fb0d4d

Create eval_tf2.py

view details

push time in 2 days

push event CNOCycle/auto-attack

CNOCycle

commit sha 188deaaf2be12098a432b2f51a534fdcfccc3e70

Create eval_tf1.py

view details

push time in 2 days

push event CNOCycle/auto-attack

CNOCycle

commit sha b86c12ced1c2f8841649d07141f636e2fadd924d

fix format

view details

push time in 2 days

pull request comment fra31/auto-attack

support TF2

It looks good to me. Would it be possible for you to add an example (including any small model with checkpoints) of how to use it (as in https://github.com/fra31/auto-attack/tree/master/examples)?

Sure, I will provide the TF versions' examples.

CNOCycle

comment created time in 3 days

Pull request review comment fra31/auto-attack

support TF2

import tensorflow as tf
import numpy as np
import torch

class ModelAdapter():
    def __init__(self, model, num_classes=10):
        """
        Please note that model should be tf.keras model without activation function 'softmax'
        """
        self.num_classes = num_classes
        self.tf_model = model
        self.__check_channel_ordering()

    def __check_channel_ordering(self):

        for L in self.tf_model.layers:
            if isinstance(L, tf.keras.layers.Conv2D):
                print("[INFO] set data_format = '{:s}'".format(L.data_format))
                self.data_format = L.data_format
                return

        print("[INFO] Can not find Conv2D layer")
        input_shape = self.tf_model.input_shape

        if input_shape[3] == 3:
            print("[INFO] Because detecting input_shape[3] == 3, set data_format = 'channels_last'")
            self.data_format = 'channels_last'

        elif input_shape[3] == 1:
            print("[INFO] Because detecting input_shape[3] == 1, set data_format = 'channels_last'")
            self.data_format = 'channels_last'

        else:
            print("[INFO] set data_format = 'channels_first'")
            self.data_format = 'channels_first'

    def _get_logits(self, x_input):
        logits = self.tf_model(x_input, training=False)
        return logits

    def _get_jacobian(self, x_input):
        with tf.GradientTape(watch_accessed_variables=False) as g:
            g.watch(x_input)
            logits = self._get_logits(x_input)

        jacobian = g.batch_jacobian(logits, x_input)

        if self.data_format == 'channels_last':
            jacobian = tf.transpose(jacobian, perm=[0, 1, 4, 2, 3])

        return jacobian

    def _get_xent(self, logits, y_input):
        xent = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_input)
        return xent

    def _get_grad_xent(self, x_input, y_input):
        with tf.GradientTape(watch_accessed_variables=False) as g:
            g.watch(x_input)
            logits = self._get_logits(x_input)
            xent = self._get_xent(logits, y_input)

        grad_xent = g.gradient(xent, x_input)

        return logits, xent, grad_xent

    def _get_dir(self, logits, y_input):
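
As an aside, the transpose in `_get_jacobian` is what handles the channel ordering: for a channels_last model, `batch_jacobian` returns shape (batch, classes, H, W, C), and `perm=[0, 1, 4, 2, 3]` moves the channel axis forward into the NCHW layout the PyTorch-side attack code expects. A minimal shape check (toy model, hypothetical sizes):

import tensorflow as tf

# Toy channels_last model: 28x28x1 inputs, 10 logits (no softmax).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, padding='same', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

x = tf.zeros((2, 28, 28, 1))
with tf.GradientTape() as g:
    g.watch(x)
    logits = model(x, training=False)

jac = g.batch_jacobian(logits, x)
print(jac.shape)  # (2, 10, 28, 28, 1) -> (batch, classes, H, W, C)

jac_nchw = tf.transpose(jac, perm=[0, 1, 4, 2, 3])
print(jac_nchw.shape)  # (2, 10, 1, 28, 28) -> (batch, classes, C, H, W)
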

OK, I will fix the format later.

CNOCycle

comment created time in 3 days

push event CNOCycle/auto-attack

CNOCycle

commit sha c285fe06bd76e70d01ea1879ef433c73b3e47b4a

improving function `__check_channel_ordering`

view details

push time in 15 days

push event CNOCycle/auto-attack

CNOCycle

commit sha 88c4b66417aabece98a27d71cac83d6d265d5603

update README.md

view details

push time in 15 days

push event CNOCycle/auto-attack

CNOCycle

commit sha b431283e7c7425f928b0c56c36fd9c5635fc94f3

fix for jacobian's shape

view details

push time in 15 days

PR opened fra31/auto-attack

support TF2

Solves issue #8:

  1. supports TF2's Keras models

  2. detects the model's channel ordering and converts it automatically

+152 -0

0 comment

1 changed file

pr created time in 15 days

push event CNOCycle/auto-attack

CNOCycle

commit sha c369755950bfda14ccf164fa0517198dbf159ab7

support TF2

view details

push time in 15 days

fork CNOCycle/auto-attack

Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"

https://arxiv.org/abs/2003.01690

fork in 15 days

push event CNOCycle/leetcode

cnocycle

commit sha 2589fca9d28d37978ce1b25bdc8f76312381714f

add q5453.cpp

view details

cnocycle

commit sha 85dc61108edd97033cc7d3050207b77a1d092a5a

add q0041.cpp

view details

push time in 23 days

push event CNOCycle/leetcode

cnocycle

commit sha c6e6b1ad31052abe62dde95e8633bc2593e3edeb

fix bug in q0001.cpp

view details

cnocycle

commit sha 4c21087ab2f5d857240648639ff4a5cbf1ab2878

add q0096.cpp

view details

cnocycle

commit sha 106da313e162309b1edf7278cd674be7e2776807

add q0843.cpp

view details

push time in 23 days

issue closed tianzheng4/Distributionally-Adversarial-Attack

Divide by zero error encountered in file mi_pgd_rand.py

Hi authors,

I'm verifying my model's robustness with the attacks in this repository.

But it reports a divide-by-zero error at the following line:

https://github.com/tianzheng4/Distributionally-Adversarial-Attack/blob/0f119905caa5be72799929ae0725c6923838bab3/mnist_challenge-master/mi_pgd_rand.py#L64

Adding a small epsilon may solve this issue.
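
For illustration, a minimal sketch of that kind of fix, assuming the failing line is a momentum-style gradient normalization; the array shapes and variable names are hypothetical and not taken from mi_pgd_rand.py:

import numpy as np

eps_div = 1e-12  # small constant that prevents division by zero

# Hypothetical MI-style normalization: if the gradient is all zeros, the mean
# of its absolute values is zero and the plain division blows up.
grad = np.zeros((1, 28, 28, 1), dtype=np.float32)
normalized = grad / (np.mean(np.abs(grad), axis=(1, 2, 3), keepdims=True) + eps_div)
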

closed time in a month

CNOCycle

issue comment tensorflow/tensorflow

`tf.keras.models.clone_model` does not support custom model

An alternative solution is:

wrap_model = Composite(inputs=new_model.input, outputs=new_model.output) 
wrap_model.compile(loss='binary_crossentropy',optimizer='SGD', metrics=['accuracy'])
wrap_model.fit(X,Y,verbose=2)

I don't think this is a good solution, but it is an acceptable choice.

If tf.keras.models.clone_model cannot support custom models, this should be emphasized in the documentation.
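
A slightly more reusable sketch of the same workaround (a hypothetical helper, not part of any TensorFlow API):

import tensorflow as tf

def clone_custom_model(model, subclass):
    """Hypothetical helper: clone `model`'s architecture with clone_model, then
    re-wrap the inputs/outputs in `subclass` (e.g. the Composite class from the
    original issue) so the overridden train_step is used again."""
    cloned = tf.keras.models.clone_model(model)   # returns a plain functional Model
    return subclass(inputs=cloned.input, outputs=cloned.output)
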

CNOCycle

comment created time in a month

issue opened fra31/auto-attack

PyTorch API `view` may not work if tensors are non-contiguous

Hi authors,

I'm not familiar with PyTorch, but I occasionally get some errors.

It complains that the tensor is not contiguous and suggests that reshape is better than view.

https://github.com/fra31/auto-attack/blob/0185c7930e5c535ff3380197c54c74ba916f449b/fab_tf.py#L394-L407

I also found that the code is not consistent. As you can see, L397 and L401 use view but L404 uses reshape.

An alternative solution is calling .contiguous() before .view(...). Alternatively, view could be replaced by reshape.

I'm not sure which solution is suitable in this project.

Any suggestion?

created time in a month

issue opened tensorflow/tensorflow

`tf.keras.models.clone_model` does not support custom model

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: N/A
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version (use command below): 2.2.0
  • Python version: Python 3.6.10 :: Anaconda, Inc.
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): N/A
  • CUDA/cuDNN version: CUDA 10.1
  • GPU model and memory: GeForce RTX 2080 Ti

Describe the current behavior

The TF 2.2.0 release notes say:

  • You can now use custom training logic with Model.fit by overriding Model.train_step
  • Easily write state-of-the-art training loops without worrying about all of the features Model.fit handles for you (distribution strategies, callbacks, data formats, looping logic, etc)

I have implemented my own custom model whose train_step is overridden, and I want to create an identical model with the API tf.keras.models.clone_model.

But the problem is that my custom train_step is gone.

Describe the expected behavior

tf.keras.models.clone_model should copy not only the model's layers but also train_step.

Standalone code to reproduce the issue

#%%
import numpy as np
import tensorflow as tf
print(tf.__version__)

#%% 
class Composite(tf.keras.Model):
    def __init__(self, *args, **kwargs):

        super(Composite, self).__init__(*args, **kwargs)

    def train_step(self, data):

        data_adapter = tf.python.keras.engine.data_adapter
        data = data_adapter.expand_1d(data)
        x, y, sample_weight = data_adapter.unpack_x_y_sample_weight(data)

        tf.print("HIHI! I'm in function train_step!")

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(
                y, y_pred, sample_weight, regularization_losses=self.losses)

        _minimize = tf.python.keras.engine.training._minimize
        _minimize(self.distribute_strategy, tape, self.optimizer, loss,
                self.trainable_variables)

        self.compiled_metrics.update_state(y, y_pred, sample_weight)
        return {m.name: m.result() for m in self.metrics}

in_ = tf.keras.layers.Input(shape=(10, ) )
x = tf.keras.layers.Dense(1)(in_)
model = Composite(inputs=in_, outputs=x)
model.compile(loss='binary_crossentropy',optimizer='SGD', metrics=['accuracy'])

X = np.zeros((10,10))
Y = np.zeros((10,1))
model.fit(X,Y,verbose=2)

# %%
new_model = tf.keras.models.clone_model(model)
new_model.compile(loss='binary_crossentropy',optimizer='SGD', metrics=['accuracy'])
new_model.fit(X,Y,verbose=2)

Other info / logs

Here is the original model's output:
HIHI! I'm in function train_step!
1/1 - 0s - loss: 0.0000e+00 - accuracy: 1.0000
<tensorflow.python.keras.callbacks.History at 0x7fa9603dd6a0>
Here is the NEW model's output:
1/1 - 0s - loss: 0.0000e+00 - accuracy: 1.0000
<tensorflow.python.keras.callbacks.History at 0x7fa96023bbe0>

From the console output, you can see that HIHI! I'm in function train_step! is gone when I run new_model.fit(X,Y,verbose=2).
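
A quick way to see why (a sketch using the reproduction above): the clone is no longer an instance of the custom subclass, so Keras falls back to the default train_step.

# Sketch: inspecting the types after cloning shows why the override is gone.
# The exact class name of the clone may vary by TF version.
print(type(model).__name__)              # Composite
print(type(new_model).__name__)          # a plain keras Model, not Composite
print(isinstance(new_model, Composite))  # expected: False
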

created time in a month

issue opened fra31/auto-attack

TF2 implementation

Hi authors,

I sincerely thank all the authors for their time and effort. Auto-attack is a powerful tool that helps me check my defense's robustness.

After reading the source code, I only found APIs for TF1. The latest version of TF1 (tf-1.15.0) was published 6 months ago. I think the APIs should be upgraded to support TF2.

I am willing to implement APIs for TF2 if necessary.

Thanks

created time in a month

create branch CNOCycle/leetcode

branch : master

created branch time in 2 months

created repository CNOCycle/leetcode

created time in 2 months

push event CNOCycle/cleverhans

cnocycle

commit sha b103b0caf983331ec9c5f7c3e08e417a85bf25e2

fix spsa

view details

push time in 3 months

PR opened tensorflow/cleverhans

[TF2] spsa

  1. fix for data type

  2. set training=False

  3. fix for gradient issue #1087
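
On point 2, an illustrative sketch (not the actual cleverhans diff) of why a Keras model is called in inference mode while computing attack gradients:

import tensorflow as tf

def logits_fn(model, x):
    # Calling the Keras model with training=False disables dropout and keeps
    # batch normalization on its moving statistics, so the attack computes
    # gradients against the same function the model uses at test time.
    return model(x, training=False)
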
+7 -3

0 comment

1 changed file

pr created time in 3 months

create branch CNOCycle/cleverhans

branch : tf2/spsa

created branch time in 3 months
