Azure/azure-event-hubs-python 51

Python client library for Azure Event Hubs

annatisch/azure-event-hubs-python 1

Python client library for Azure Event Hubs

annatisch/azure-uamqp-python 1

AMQP 1.0 client library for Python

annatisch/presentations 1

Presentation slides and code

AutorestCI/azure-sdk-for-python 1

Microsoft Azure SDK for Python

yunhaoling/azure-sdk-for-python 1

Microsoft Azure SDK for Python

annatisch/autorest.python 0

Extension for AutoRest (https://github.com/Azure/autorest) that generates Python code

annatisch/azure-batch-maya 0

Cloud rendering from Maya using Azure Batch

annatisch/azure-c-shared-utility 0

Azure C SDKs common code

issue comment Azure/azure-sdk-for-python

Cosmos: query_items_change_feed returns same result with is_start_from_beginning parameters

Thanks @nonokangwei - I will try to repro today :)

nonokangwei

comment created time in 4 hours

Pull request review comment Azure/azure-sdk-for-python

[ServiceBus] Track2 - Dead Letter Queue Receiver Implementation and Dead Letter Fix

 def from_connection_string(
         :keyword dict http_proxy: HTTP proxy settings. This must be a dictionary with the following
          keys: `'proxy_hostname'` (str value) and `'proxy_port'` (int value).
          Additionally the following keys may also be present: `'username', 'password'`.
-        :rtype: ~azure.servicebus.ServiceBusReceiverClient
+        :keyword bool is_dead_letter_receiver: Should this receiver connect to the dead-letter-queue associated

Sounds good!

yunhaoling

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][QuickQuery]Add Quick Query Support

 def _from_generated(cls, generated):
             )
             return scope
         return None
+
+
+class DelimitedTextConfiguration(GenDelimitedTextConfiguration):
+    """Defines the input or output delimited (CSV) serialization for a blob quick query request.
+
+    :keyword str column_separator: Required. column separator
+    :keyword str field_quote: Required. field quote
+    :keyword str record_separator: Required. record separator
+    :keyword str escape_char: Required. escape char
+    :keyword bool headers_present: Required. has headers
+    """
+    def __init__(self, **kwargs):

Docstring says there are required parameters....
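
If the defaults stay, the docstring lines would presumably drop the "Required." markers, e.g. (hypothetical wording):

:keyword str field_quote: Field quote character. Defaults to '"'.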

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][QuickQuery]Add Quick Query Support

 def _from_generated(cls, generated):
             )
             return scope
         return None
+
+
+class DelimitedTextConfiguration(GenDelimitedTextConfiguration):
+    """Defines the input or output delimited (CSV) serialization for a blob quick query request.
+
+    :keyword str column_separator: Required. column separator
+    :keyword str field_quote: Required. field quote
+    :keyword str record_separator: Required. record separator
+    :keyword str escape_char: Required. escape char
+    :keyword bool headers_present: Required. has headers
+    """
+    def __init__(self, **kwargs):
+        field_quote = kwargs.pop('field_quote', '"')
+        escape_char = kwargs.pop('escape_char', "")
+        super(DelimitedTextConfiguration, self).__init__(field_quote=field_quote,
+                                                         escape_char=escape_char,
+                                                         **kwargs)
+
+
+class JsonTextConfiguration(GenJsonTextConfiguration):
+    """Defines the input or output JSON serialization for a blob quick query request.
+
+    :keyword str record_separator: Required. record separator

Docstring says there's a required parameter?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][QuickQuery]Add Quick Query Support

     ContainerSasPermissions,
     BlobSasPermissions,
     CustomerProvidedEncryptionKey,
-    ContainerEncryptionScope
+    ContainerEncryptionScope,
+    QuickQueryError,

Can we rename both QuickQueryError and QuickQueryReader?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][QuickQuery]Add Quick Query Support

 def download_blob(self, offset=None, length=None, **kwargs):
             **kwargs)
         return StorageStreamDownloader(**options)
 
+    def _quick_query_options(self, query_expression,
+                             **kwargs):
+        # type: (str, **Any) -> Dict[str, Any]
+        input_serialization = kwargs.pop('input_serialization', None)
+        output_serialization = kwargs.pop('output_serialization', None)
+        query_request = QueryRequest(expression=query_expression,
+                                     input_serialization=get_quick_query_serialization_info(input_serialization),
+                                     output_serialization=get_quick_query_serialization_info(output_serialization))
+        access_conditions = get_access_conditions(kwargs.pop('lease', None))
+        mod_conditions = get_modify_conditions(kwargs)
+
+        cpk = kwargs.pop('cpk', None)
+        cpk_info = None
+        if cpk:
+            if self.scheme.lower() != 'https':
+                raise ValueError("Customer provided encryption key must be used over HTTPS.")
+            cpk_info = CpkInfo(encryption_key=cpk.key_value, encryption_key_sha256=cpk.key_hash,
+                               encryption_algorithm=cpk.algorithm)
+        options = {
+            'query_request': query_request,
+            'lease_access_conditions': access_conditions,
+            'modified_access_conditions': mod_conditions,
+            'cpk_info': cpk_info,
+            'progress_callback': kwargs.pop('progress_callback', None),
+            'snapshot': self.snapshot,
+            'timeout': kwargs.pop('timeout', None),
+            'cls': return_headers_and_deserialized,
+            'client': self._client,
+            'name': self.blob_name,
+            'container': self.container_name}
+        options.update(kwargs)
+        return options
+
+    @distributed_trace
+    def query(self, query_expression,  # type: str

I think this should be def query_blobs(self, query, **kwargs)

Thoughts?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def get_api_version(kwargs, default):
         versions = '\n'.join(_SUPPORTED_API_VERSIONS)
         raise ValueError("Unsupported API version '{}'. Please select from:\n{}".format(api_version, versions))
     return api_version or default
+
+
+def serialize_blob_tags_header(tags=None):
+    # type: (Optional[Dict[str, str]]) -> str
+    components = list()
+    if tags:
+        for key, value in tags.items():
+            components.append(quote(key, safe='.-'))
+            components.append('=')
+            components.append(quote(value, safe='.-'))
+            components.append('&')
+
+    if components:
+        del components[-1]
+
+    return ''.join(components)
+
+
+def serialize_blob_tags(tags=None):
+    # type: (Optional[Dict[str, str]]) -> Union[BlobTags, None]
+    tag_list = list()
+    if tags:
+        for tag_key, tag_value in tags.items():
+            tag_list.append(BlobTag(key=tag_key, value=tag_value))
if tags:
    tag_list = [BlobTag(key=k, value=v) for k, v in tags.items()]
xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def get_api_version(kwargs, default):
         versions = '\n'.join(_SUPPORTED_API_VERSIONS)
         raise ValueError("Unsupported API version '{}'. Please select from:\n{}".format(api_version, versions))
     return api_version or default
+
+
+def serialize_blob_tags_header(tags=None):
+    # type: (Optional[Dict[str, str]]) -> str
+    components = list()
+    if tags:
+        for key, value in tags.items():
+            components.append(quote(key, safe='.-'))
+            components.append('=')
+            components.append(quote(value, safe='.-'))
+            components.append('&')
+
+    if components:
+        del components[-1]
+
+    return ''.join(components)

If you use '&'.join(components) then you won't have to delete it off the end in the line above :)
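
A minimal sketch of that simplification (same quoting rules as the diff above):

def serialize_blob_tags_header(tags=None):
    # type: (Optional[Dict[str, str]]) -> str
    if not tags:
        return ''
    # '&'.join supplies the separators, so there is no trailing '&' to trim
    return '&'.join('{}={}'.format(quote(key, safe='.-'), quote(value, safe='.-'))
                    for key, value in tags.items())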

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def _from_generated(cls, generated):
         blob.blob_tier_inferred = generated.properties.access_tier_inferred
         blob.archive_status = generated.properties.archive_status
         blob.blob_tier_change_time = generated.properties.access_tier_change_time
+        blob.tag_count = generated.properties.tag_count
+        blob.tags = blob._parse_tags(generated.blob_tags)  # pylint: disable=protected-access
         return blob
 
+    @staticmethod
+    def _parse_tags(generated_tags):
+        # type: (Optional[List[BlobTag]]) -> Union[Dict[str, str], None]
+        """Deserialize a list of BlobTag objects into a dict.
+        """
+        if generated_tags:
+            tag_dict = dict()
+            for blob_tag in generated_tags.blob_tag_set:
+                tag_dict[blob_tag.key] = blob_tag.value
+            return tag_dict
if generated_tags:
    tags = {t.key: t.value for t in generated_tags.blob_tag_set}
xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def list_containers(
                 page_iterator_class=ContainerPropertiesPaged
             )
 
+    @distributed_trace
+    def filter_blobs(self, where=None, **kwargs):
+        # type: (Optional[str], Optional[Any], **Any) -> ItemPaged[BlobProperties]
+        """The Filter Blobs operation enables callers to list blobs across all
+        containers whose tags match a given search expression.  Filter blobs
+        searches across all containers within a storage account but can be
+        scoped within the expression to a single container.
+
+        :param str where:
+            Filters the results to return only to return only blobs
+            whose tags match the specified expression.
+        :keyword int results_per_page:
+            The max result per page when paginating.
+        :keyword int timeout:
+            The timeout parameter is expressed in seconds.
+        :returns: An iterable (auto-paging) response of BlobProperties.
+        :rtype: ~azure.core.paging.ItemPaged[~azure.storage.blob.BlobProperties]

Should this be ~azure.core.paging.ItemPaged[~azure.storage.blob.FilteredBlob]?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def list_containers(
                 page_iterator_class=ContainerPropertiesPaged
             )
 
+    @distributed_trace
+    def filter_blobs(self, where=None, **kwargs):
+        # type: (Optional[str], Optional[Any], **Any) -> ItemPaged[BlobProperties]
+        """The Filter Blobs operation enables callers to list blobs across all
+        containers whose tags match a given search expression.  Filter blobs
+        searches across all containers within a storage account but can be
+        scoped within the expression to a single container.
+
+        :param str where:
+            Filters the results to return only to return only blobs
+            whose tags match the specified expression.

Duplicate "to return only"

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def list_containers(
                 page_iterator_class=ContainerPropertiesPaged
             )
 
+    @distributed_trace
+    def filter_blobs(self, where=None, **kwargs):

What does it mean if where=None? Is that the same behaviour as "list all blobs in all containers"?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):
         except StorageErrorException as error:
             process_storage_error(error)
 
+    def _set_blob_tags_options(self, tags=None, **kwargs):
+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
+        headers = kwargs.pop('headers', {})
+
+        tags = serialize_blob_tags(tags)
+
+        options = {
+            'tags': tags,
+            'version_id': kwargs.pop('version_id', None),
+            'timeout': kwargs.pop('timeout', None),
+            'cls': return_response_headers,
+            'headers': headers}
+        options.update(kwargs)
+        return options
+
+    @distributed_trace
+    def set_blob_tags(self, tags=None, **kwargs):
+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
+        """The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.
+            Each call to this operation replaces all existing tags attached to the blob. To remove all
+            tags from the blob, call this operation with no tags set.
+
+        .. versionadded:: 12.4.0
+            This operation was introduced in API version '2019-12-12'.
+
+        :param tags:
+            Name-value pairs associated with the blob as tag. Tags are case-sensitive.
+        :type tags: dict(str, str)
+        :keyword str version_id:
+            The version id parameter is an opaque DateTime
+            value that, when present, specifies the version of the blob to add tags to.
+        :keyword bool validate_content:
+            If true, calculates an MD5 hash of the tags content. The storage
+            service checks the hash of the content that has arrived
+            with the hash that was sent. This is primarily valuable for detecting
+            bitflips on the wire if using http instead of https, as https (the default),
+            will already validate. Note that this MD5 hash is not stored with the
+            blob.
+        :keyword int timeout:
+            The timeout parameter is expressed in seconds.
+        :returns: Blob-updated property dict (Etag and last modified)
+        :rtype: Dict[str, Any]
+        """
+        options = self._set_blob_tags_options(tags=tags, **kwargs)
+        try:
+            return self._client.blob.set_tags(**options)
+        except StorageErrorException as error:
+            process_storage_error(error)
+
+    def _get_blob_tags_options(self, **kwargs):
+        # type: (**Any) -> Dict[str, str]
+
+        options = {
+            'version_id': kwargs.pop('version_id', None),
+            'snapshot': self.snapshot,
+            'timeout': kwargs.pop('timeout', None),
+            'cls': return_headers_and_deserialized}
+        return options
+
+    @distributed_trace
+    def get_blob_tags(self, **kwargs):
+        # type: (**Any) -> Dict[str, str]
+        """The Get Tags operation enables users to get tags on a blob or specific blob version, but not snapshot.
+
+        .. versionadded:: 12.4.0
+            This operation was introduced in API version '2019-12-12'.
+
+        :keyword str version_id:
+            If true, calculates an MD5 hash of the tags content. The storage

This looks like the docstring for validate_content rather than version id

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):
         except StorageErrorException as error:
             process_storage_error(error)
 
+    def _set_blob_tags_options(self, tags=None, **kwargs):
+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
+        headers = kwargs.pop('headers', {})
+
+        tags = serialize_blob_tags(tags)
+
+        options = {
+            'tags': tags,
+            'version_id': kwargs.pop('version_id', None),
+            'timeout': kwargs.pop('timeout', None),
+            'cls': return_response_headers,
+            'headers': headers}
+        options.update(kwargs)
+        return options
+
+    @distributed_trace
+    def set_blob_tags(self, tags=None, **kwargs):
+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
+        """The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.
+            Each call to this operation replaces all existing tags attached to the blob. To remove all
+            tags from the blob, call this operation with no tags set.
+
+        .. versionadded:: 12.4.0
+            This operation was introduced in API version '2019-12-12'.
+
+        :param tags:
+            Name-value pairs associated with the blob as tag. Tags are case-sensitive.
+        :type tags: dict(str, str)
+        :keyword str version_id:
+            The version id parameter is an opaque DateTime
+            value that, when present, specifies the version of the blob to add tags to.
+        :keyword bool validate_content:
+            If true, calculates an MD5 hash of the tags content. The storage
+            service checks the hash of the content that has arrived
+            with the hash that was sent. This is primarily valuable for detecting
+            bitflips on the wire if using http instead of https, as https (the default),
+            will already validate. Note that this MD5 hash is not stored with the
+            blob.
+        :keyword int timeout:
+            The timeout parameter is expressed in seconds.
+        :returns: Blob-updated property dict (Etag and last modified)
+        :rtype: Dict[str, Any]
+        """
+        options = self._set_blob_tags_options(tags=tags, **kwargs)
+        try:
+            return self._client.blob.set_tags(**options)
+        except StorageErrorException as error:
+            process_storage_error(error)
+
+    def _get_blob_tags_options(self, **kwargs):
+        # type: (**Any) -> Dict[str, str]
+
+        options = {
+            'version_id': kwargs.pop('version_id', None),
+            'snapshot': self.snapshot,

In the docstring below it says we can't get the tags for a snapshot - yet we're passing it in here?

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):
         except StorageErrorException as error:
             process_storage_error(error)
 
+    def _set_blob_tags_options(self, tags=None, **kwargs):
+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
+        headers = kwargs.pop('headers', {})
+
+        tags = serialize_blob_tags(tags)
+
+        options = {
+            'tags': tags,
+            'version_id': kwargs.pop('version_id', None),
+            'timeout': kwargs.pop('timeout', None),
+            'cls': return_response_headers,
+            'headers': headers}

I don't think we need to pop timeout, version_id or headers - these will automatically be passed through with kwargs
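
A sketch of that simplification (assuming options.update(kwargs) stays, so the generated layer still receives those keys):

def _set_blob_tags_options(self, tags=None, **kwargs):
    # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]
    options = {
        'tags': serialize_blob_tags(tags),
        'cls': return_response_headers}
    options.update(kwargs)  # version_id, timeout and headers flow through unchanged
    return options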

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def _build_item(self, item):
         return item
 
 
+class FilteredBlob(FilterBlobItem):
+    """Blob info from a Filter Blobs API call.
+
+    :ivar name: Blob name
+    :type name: str
+    :ivar container_name: Container name.
+    :type container_name: str
+    :ivar tag_value: tag value filtered by the expression.
+    :type tag_value: str
+    """
+    def __init__(self, **kwargs):  # pylint:disable=useless-super-delegation
+        super(FilteredBlob, self).__init__(**kwargs)

No, in this case we don't need to override the constructor - it's fine to just use the parent constructor directly. If we ever need to add new/required parameters to the constructor we can add the explicit override. So it will just be a class name with a docstring - see the sketch below :)
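
Something like (a sketch of the suggested shape):

class FilteredBlob(FilterBlobItem):
    """Blob info from a Filter Blobs API call.

    :ivar str name: Blob name.
    :ivar str container_name: Container name.
    :ivar str tag_value: Tag value filtered by the expression.
    """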

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def upload_blob(  # pylint: disable=too-many-locals
         :param metadata:
             Name-value pairs associated with the blob as metadata.
         :type metadata: dict(str, str)
+        :keyword blob_tags:
+            Name-value pairs associated with the blob as tag.
+        :paramtype blob_tags: dict(str, str)

Yes I think paramtype is correct.

xiafu-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 def deserialize_blob_properties(response, obj, headers):
     return blob_properties
 
 
+def deserialize_ors_policies(response):
+    # For source blobs (blobs that have policy ids and rule ids applied to them),
+    # the header will be formatted as "x-ms-or-<policy_id>_<rule_id>: {Complete, Failed}".
+    # The value of this header is the status of the replication.
+    or_policy_status_headers = {key: val for key, val in response.headers.items()
+                                if key.startswith('x-ms-or') and key != 'x-ms-or-policy-id'}
+
+    parsed_result = {}
+
+    for key, val in or_policy_status_headers.items():
+        policy_and_rule_ids = key[len('x-ms-or-'):].split('_')
+        policy_id = policy_and_rule_ids[0]
+        rule_id = policy_and_rule_ids[1]
+
+        # we are seeing this policy for the first time, so a new rule_id -> result dict is needed
+        if parsed_result.get(policy_id) is None:
+            parsed_result[policy_id] = {rule_id: val}
+        else:
+            parsed_result.get(policy_id)[rule_id] = val

It is more efficient to do: parsed_result[policy_id][rule_id] = val
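
For reference, dict.setdefault collapses the branch entirely - a sketch, not necessarily the style this codebase prefers:

parsed_result.setdefault(policy_id, {})[rule_id] = val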

zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

+# coding: utf-8
+
+# -------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for
+# license information.
+# --------------------------------------------------------------------------
+import pytest
+from _shared.testcase import StorageTestCase, GlobalStorageAccountPreparer
+
+from azure.storage.blob import (
+    BlobServiceClient,
+    BlobType,
+    BlobProperties,
+)
+
+from azure.storage.blob._deserialize import deserialize_ors_policies
+
+
+class StorageObjectReplicationTest(StorageTestCase):

Could we get async copies of these tests as well?

zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 def deserialize_blob_properties(response, obj, headers):
     return blob_properties
 
 
+def deserialize_ors_policies(response):
+    # For source blobs (blobs that have policy ids and rule ids applied to them),
+    # the header will be formatted as "x-ms-or-<policy_id>_<rule_id>: {Complete, Failed}".
+    # The value of this header is the status of the replication.
+    or_policy_status_headers = {key: val for key, val in response.headers.items()
+                                if key.startswith('x-ms-or') and key != 'x-ms-or-policy-id'}
+
+    parsed_result = {}
+
+    for key, val in or_policy_status_headers.items():
+        policy_and_rule_ids = key[len('x-ms-or-'):].split('_')
+        policy_id = policy_and_rule_ids[0]
+        rule_id = policy_and_rule_ids[1]
+
+        # we are seeing this policy for the first time, so a new rule_id -> result dict is needed
+        if parsed_result.get(policy_id) is None:

if policy_id not in parsed_result
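
Spelled out, that guard would read something like:

if policy_id not in parsed_result:
    parsed_result[policy_id] = {}
parsed_result[policy_id][rule_id] = val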

zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 class BlobProperties(DictMixin):
         container-level scope is configured to allow overrides. Otherwise an error will be raised.
     :ivar bool request_server_encrypted:
         Whether this blob is encrypted.
+    :ivar dict(str, dict(str, str)) object_replication_source_properties:
+        Only present for blobs that have policy ids and rule ids applied to them.
+        Dictionary<policy_id, Dictionary<rule_id, status of replication(Complete,Failed)

Missing closing angle brackets
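
i.e. with the brackets closed, the line would read:

Dictionary<policy_id, Dictionary<rule_id, status of replication(Complete, Failed)>>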

zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 def deserialize_blob_properties(response, obj, headers):
     return blob_properties
 
 
+def deserialize_ors_policies(response):
+    # For source blobs (blobs that have policy ids and rule ids applied to them),
+    # the header will be formatted as "x-ms-or-<policy_id>_<rule_id>: {Complete, Failed}".
+    # The value of this header is the status of the replication.
+    or_policy_status_headers = {key: val for key, val in response.headers.items()
+                                if key.startswith('x-ms-or') and key != 'x-ms-or-policy-id'}
+
+    parsed_result = {}
+
+    for key, val in or_policy_status_headers.items():
+        policy_and_rule_ids = key[len('x-ms-or-'):].split('_')
+        policy_id = policy_and_rule_ids[0]
+        rule_id = policy_and_rule_ids[1]
+
+        # we are seeing this policy for the first time, so a new rule_id -> result dict is needed
+        if parsed_result.get(policy_id) is None:

Better yet:

try:
    parsed_result[policy_id][rule_id] = val
except KeyError:
    parsed_result[policy_id] = {rule_id: val}
zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 def deserialize_blob_properties(response, obj, headers):
     return blob_properties
 
 
+def deserialize_ors_policies(response):
+    # For source blobs (blobs that have policy ids and rule ids applied to them),
+    # the header will be formatted as "x-ms-or-<policy_id>_<rule_id>: {Complete, Failed}".
+    # The value of this header is the status of the replication.
+    or_policy_status_headers = {key: val for key, val in response.headers.items()
+                                if key.startswith('x-ms-or') and key != 'x-ms-or-policy-id'}
+
+    parsed_result = {}
+
+    for key, val in or_policy_status_headers.items():
+        policy_and_rule_ids = key[len('x-ms-or-'):].split('_')

Also - is it possible for the policy or rule IDs themselves to contain underscores?
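
If the first underscore is always the separator, a maxsplit would at least keep a rule id containing underscores intact (a sketch; it doesn't help if the policy id itself has one):

policy_id, rule_id = key[len('x-ms-or-'):].split('_', 1)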

zezha-msft

comment created time in 3 days

Pull request review comment Azure/azure-sdk-for-python

[Storage][Blob] Added support for Object Replication

 def deserialize_blob_properties(response, obj, headers):
     return blob_properties
 
 
+def deserialize_ors_policies(response):
+    # For source blobs (blobs that have policy ids and rule ids applied to them),
+    # the header will be formatted as "x-ms-or-<policy_id>_<rule_id>: {Complete, Failed}".
+    # The value of this header is the status of the replication.
+    or_policy_status_headers = {key: val for key, val in response.headers.items()
+                                if key.startswith('x-ms-or') and key != 'x-ms-or-policy-id'}
+
+    parsed_result = {}
+
+    for key, val in or_policy_status_headers.items():
+        policy_and_rule_ids = key[len('x-ms-or-'):].split('_')

Do we need the len operation? This could just be a constant value with a comment to state its meaning
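
For example (hypothetical constant name):

_OR_HEADER_PREFIX_LEN = 8  # len('x-ms-or-')
policy_and_rule_ids = key[_OR_HEADER_PREFIX_LEN:].split('_')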

zezha-msft

comment created time in 3 days

PR opened Azure/azure-sdk

Update Python API review process
+1 -1

0 comment

1 changed file

pr created time in 3 days

create branch Azure/azure-sdk

branch: annatisch-patch-1

created branch time in 3 days

Pull request review comment Azure/azure-sdk-for-python

Blob versioning

 class BlobSasPermissions(object):
         destination of a copy operation within the same account.
     :param bool delete:
         Delete the blob.
+    :param bool delete_version:
+        Delete the blob version for the versioning enabled storage account.

Then I think we should rename this... maybe delete_previous_versions or something...

xiafu-msft

comment created time in 3 days

issue comment Azure/azure-sdk-for-python

Cosmos: RecordDiagnostics.request_charge is always 0

Thanks @shabbyrobe - I think a lib version issue seems unlikely - as I believe in that scenario the code snippet above wouldn't work at all and we would see more fundamental errors... Admittedly I did run my repro in Python 3.7, so I will try it again in 3.6 just to rule that out.

However I'm not an expert in the Cosmos service behaviour. @southpolesteve could this difference be related to how our Cosmos accounts/resources are configured?

shabbyrobe

comment created time in 3 days

issue comment Azure/azure-sdk-for-python

Cosmos: RecordDiagnostics.request_charge is always 0

Hi @shabbyrobe, I tried adding your query code to one of our tests:

created_collection = create_multi_partition_collection_with_custom_pk_if_not_exist(self.client)
diag = RecordDiagnostics()
items = list(created_collection.query_items(
    'SELECT VALUE COUNT(1) FROM c',
    enable_cross_partition_query=True,
    populate_query_metrics=True,
    response_hook=diag,
))
assert len(items) > 0
assert diag.headers
assert diag.request_charge > 0

And this test is passing - ('x-ms-request-charge': '2.25')

@southpolesteve - could you please confirm the expected service behaviour of this header? Maybe I need to tweak my test to repro this?

shabbyrobe

comment created time in 4 days

issue comment Azure/azure-sdk-for-python

Cosmos: RecordDiagnostics.request_charge is always 0

Thanks @shabbyrobe - taking a look now.

shabbyrobe

comment created time in 4 days

Pull request review comment Azure/azure-sdk-for-python

Changefeed

 # Licensed under the MIT License. See License.txt in the project root for
 # license information.
 # --------------------------------------------------------------------------
+__path__ = __import__('pkgutil').extend_path(__path__, __name__)  # type: str

We use this shared namespace pattern in other places that you can use as an example. Take a look at how they handle setup.py, directory structure, dev env set up, etc.

  • Event Hubs extensions. This package takes a hard dependency on the azure-eventhubs sdk and shares its namespace. https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub-checkpointstoreblob
  • Telemetry extensions. This package takes a hard dependency on azure-core and shares its namespace: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/core/azure-core-tracing-opentelemetry

You can keep this line in your local environment if it helps you run tests locally, etc. - however please remove it from checked-in code.

xiafu-msft

comment created time in 4 days

PR opened Azure/azure-sdk

Update Python release notes for cosmos
+10 -0

0 comment

1 changed file

pr created time in 4 days

create branch Azure/azure-sdk

branch: annatisch-patch-1

created branch time in 4 days

Pull request review comment Azure/azure-sdk-for-python

[ServiceBus] Track2 - Dead Letter Queue Receiver Implementation and Dead Letter Fix

 async def dead_letter(  # type: ignore
         """
         # pylint: disable=protected-access
         self._check_live(MESSAGE_DEAD_LETTER)
-        await self._settle_message(MESSAGE_DEAD_LETTER)
+
+        details = {
+            RECEIVER_LINK_DEAD_LETTER_REASON: reason,
+            RECEIVER_LINK_DEAD_LETTER_DESCRIPTION: description
+        }
+
+        await self._settle_message(MESSAGE_DEAD_LETTER, dead_letter_details=details)

Do we need to update async _settle_via_mgmt_link?

yunhaoling

comment created time in 4 days

Pull request review comment Azure/azure-sdk-for-python

[ServiceBus] Track2 - Dead Letter Queue Receiver Implementation and Dead Letter Fix

 def from_connection_string(
         :keyword dict http_proxy: HTTP proxy settings. This must be a dictionary with the following
          keys: `'proxy_hostname'` (str value) and `'proxy_port'` (int value).
          Additionally the following keys may also be present: `'username', 'password'`.
-        :rtype: ~azure.servicebus.ServiceBusReceiverClient
+        :keyword bool is_dead_letter_receiver: Should this receiver connect to the dead-letter-queue associated

Is this public?

yunhaoling

comment created time in 4 days

Pull request review comment Azure/azure-sdk-for-python

Changefeed

 # Licensed under the MIT License. See License.txt in the project root for
 # license information.
 # --------------------------------------------------------------------------
+__path__ = __import__('pkgutil').extend_path(__path__, __name__)  # type: str

Thanks @lmazuel! @xiafu-msft - let's remove this line and figure out how to resolve CI :)

xiafu-msft

comment created time in 5 days

PR opened Azure/azure-rest-api-specs

[Cosmos-Tables] swagger xml fixes

+6 -1

0 comment

1 changed file

pr created time in 6 days

push event annatisch/azure-rest-api-specs

antisch

commit sha 0435601ce7502cbc33547229f48752163f2fa883

Tables swagger fixes

push time in 6 days

push event annatisch/azure-rest-api-specs

Cameron Taggart

commit sha 7d82d40119ebcdcda9b520b81b6fe59bb1d94459

add Microsoft.AVS 2019-08-09-preview API spec (#9307) * cp -r vmwarevirtustream vmware * rebrand * add sku to PrivateCloud * add SSL thumbprints * Fix description typo (#5) * lists return value[] * suppress R3020 * fix suppress * add Locations_ prefix Co-authored-by: jspearman3 <spearmanjim@yahoo.com>

azuresdkci

commit sha 3e8fb815e2e4f149fbfcdaab64efc4d45dcc0690

regenerated all-api-versions

Praneeth Sanapathi

commit sha b34500631913573b9647456175db5476bc588aa6

Add 2020-04-01 API specification for Microsoft.Peering (#9361) * Remove readonly flag from microsoftSessionAddress properties in BgpSession; add an example for IxRs directPeeringType * Add 2020-04-01 API specification for Microsoft.Peering * Fix subId * Fix static validation issues * Fix GetPeeringReceivedRoutes example * Fix PeeringServiceSku * Add RPKI to custom-words * Update readme to include package-2020-04-01 * Fix input file path in package-2020-04-01 * Fix ErrorResponse data model Co-authored-by: Praneeth Sanapathi <prsanapa@microsoft.com>

Ran Wang

commit sha ff7b07c9e9af02f5180a5b420244ee07845c2efb

Move added client name IdentityUserAssignedIdentitiesValue into an inner scope (#9363) * Bug fixes: skip url encoding for ScopeParameter; add client name for userAssignedIdentities. * Pretty check fix. * Move added client name into additionalProperties of userAssignedIdentities.

Himanshu Chhabra

commit sha 00c3dd9476f25fe4480963b5c39b225fb69b15b0

Change operationID for table and queue, update the name of an example (#9410)

JianyeXi

commit sha 09e96609dd7219f387db8a49c61d933b7930b49c

update version of rest-api-specs-scripts in PPE (#9420) Co-authored-by: Jianye Xi <jianyxi@microsoft.com>

ayfathim

commit sha de2671e762c75017959570045e71ac2b06dc83ff

adding new properties (#9404)

陈箭飞

commit sha d1abb64aea629316b984b4155314478f81fd5eba

Pipeline devops (#9374) * netapp trenton pipeline * logic pipeline test * Revert "logic pipeline test" This reverts commit ad9f1f00d99379de1e461fcbb6c803a491b5bd29. * devops pipeline test * Update readme.md Co-authored-by: root <root@cjf.1nfgrxx31qve3h2ogd1kz2gjvd.cx.internal.cloudapp.net>

Ruoxuan Wang

commit sha 260aa3d5e9467700068787493861f92fc590800c

typo (#9210)

Vivian Liu

commit sha 84485c7ab344add208363d6f1f554177d8447fc3

IoT Edge in Central APIs initial (#9248) * IoT Edge in Central APIs initial * making recommended fixes

Allen Zhang

commit sha dbaadb7c436ed619a5cfb4d6f5ce367616feacc3

Moved storage private link related models into common types (#8935) * Moved private link related model into common types * Fix path to common types.json * Fix parameter reference * fix prettier issue * Resovled the conflict and moved PrivateEndpointConnectionListResult into common and updated reference

Ashraf Hamad

commit sha 1fb8d3e3b25df6066ed267b2394c544fe18c9977

EventGrid: Update 2020-04-01-preview swagger to include new properties per customer's feedback (#9402) * Update 2020-04-01-preview swagger to include new properties per customer's feedback * fix spelling * add readonly flag for readiness state * fix prettier Co-authored-by: Ashraf Hamad <ahamad@ntdev.microsoft.com>

Feiyu Shi

commit sha 819d7a228d61edb42a5d6f7e4cf30d46681a1ce5

[Hub Generated] Review request for Microsoft.VirtualMachineImages to add version stable/2020-02-14 (#9411)

Vishnu Priya Ananthu Sundaram

commit sha 64ecd4c6e31fd89a522388d7d32717853537f9b5

[Azure Stack] Updates to Azs.Storage.Admin spec (#8959) * commit 1ef81911c39d8f618bf6ec097223ff5e8961b3cb Author: Yuxing Zhou <zyx.pulsars@gmail.com> Date: Thu Feb 6 09:13:52 2020 +0800 [Azure Stack] Update storage admin specs for new generation with autorest-beta (#8306) * Update storage admin specs for new generation with autorest-beta * fix code style commit 9e551f0eab4057d4c2f54c333c7aa2a1a564c125 Author: bganapa <bganapa@microsoft.com> Date: Tue Nov 12 11:44:22 2019 -0800 Reset to Stackadmin2 (#7766) * Fix resourcegroup case * Fix old version * Address PR feedback

陈箭飞

commit sha 35f130da66f057e0826452ef6567dae31350cf28

Customerprovider pipeline2[DONOT MERGE] (#9428) * customer provider * add flag

Yoram Singer

commit sha 49a601db21c4b40040899a214e675162a08bee1f

Add DeletedWorkspaces API (#9401)

Yoram Singer

commit sha be74fb6a70417cedfd1034c84181f9a056c61da8

Add force flag (#9417) * Add force flag * Fix

Priyaranjan Pandey

commit sha 79b540a8add1c4241c3318662fd7a16cadd8ee96

[Storage] Adding support for listing soft deleted blob containers (#9435) * [Storage] Adding support for listing soft deleted blob containers . Adding swagger support for listing soft deleted blob containers . Added examples as well * Update blob.json * Update DeletedBlobContainersList.json Prettier check failing

Xiangyu Luo

commit sha 235d00185b749027fd5c29e3aa8fe6622cc60be7

Create a New Default Tag for Code Generation (#9440) Include both Spatial Anchors and Remote Rendering in the same package.

azuresdkci

commit sha fc0ea12e0a984b4e8fecaf25ff2332f50b19908d

regenerated all-api-versions

push time in 6 days

Pull request review comment Azure/autorest.python

LRO Continuation Token [need azure-core 1.6.0]

 def get_long_running_output(pipeline_response):
         if polling is True: polling_method = AsyncARMPolling(lro_delay,  **kwargs)
         elif polling is False: polling_method = AsyncNoPolling()
         else: polling_method = polling
-        return await async_poller(self._client, raw_result, get_long_running_output, polling_method)

What did we use to be "awaiting" here?

lmazuel

comment created time in 11 days

Pull request review comment Azure/autorest.python

LRO Continuation Token [need azure-core 1.6.0]

 async def put201_creating_succeeded200(
             'polling_interval',
             self._config.polling_interval
         )
-        raw_result = await self._put201_creating_succeeded200_initial(
-            product=product,
-            cls=lambda x,y,z: x,
-            **kwargs
-        )
+        cont_token = kwargs.pop('continuation_token', None)  # type: Optional[str]
+        if cont_token is None:

So in the "resume" scenario, nothing will be awaited in this function - right?

lmazuel

comment created time in 11 days

Pull request review comment Azure/autorest.python

LRO Continuation Token

 async def put_async_retry_succeeded(
         :param product: Product to put.
         :type product: ~lro.models.Product
         :keyword callable cls: A custom type or function that will be passed the direct response
+        :keyword str continuation_token: A continuation token to restart a poller from a saved state

I think our codegen templates should add trailing "." to the end of kwarg descriptions.

lmazuel

comment created time in 11 days

Pull request review comment Azure/azure-sdk-for-python

LRO continuation_token

 async def async_poller(client, initial_response, deserialization_callback, polli
     :param polling_method: The polling strategy to adopt
     :type polling_method: ~azure.core.polling.PollingMethod
     """
+    poller = AsyncLROPoller(client, initial_response, deserialization_callback, polling_method)
+    return await poller
 
-    # This implicit test avoids bringing in an explicit dependency on Model directly
-    try:
-        deserialization_callback = deserialization_callback.deserialize
-    except AttributeError:
-        pass
 
-    # Might raise a CloudError
-    polling_method.initialize(client, initial_response, deserialization_callback)
+class AsyncLROPoller(Awaitable, Generic[PollingReturnType]):
+    """Async poller for long running operations.
+
+    :param client: A pipeline service client
+    :type client: ~azure.core.PipelineClient
+    :param initial_response: The initial call response
+    :type initial_response:
+     ~azure.core.pipeline.transport.HttpResponse or ~azure.core.pipeline.transport.AsyncHttpResponse
+    :param deserialization_callback: A callback that takes a Response and return a deserialized object.
+                                     If a subclass of Model is given, this passes "deserialize" as callback.
+    :type deserialization_callback: callable or msrest.serialization.Model
+    :param polling_method: The polling strategy to adopt
+    :type polling_method: ~azure.core.polling.AsyncPollingMethod
+    """
+
+    def __init__(
+            self,
+            client: Any,
+            initial_response: Any,
+            deserialization_callback: Callable,
+            polling_method: AsyncPollingMethod[PollingReturnType]
+        ):
+        self._polling_method = polling_method
+        self._done = False
+
+        # This implicit test avoids bringing in an explicit dependency on Model directly
+        try:
+            deserialization_callback = deserialization_callback.deserialize # type: ignore
+        except AttributeError:
+            pass
+
+        self._polling_method.initialize(client, initial_response, deserialization_callback)
+
+    def continuation_token(self) -> str:
+        """Return a continuation token that allows to restart the poller later.
+
+        :returns: An opaque continuation token
+        :rtype: str
+        """
+        return self._polling_method.get_continuation_token()
+
+    @classmethod
+    def from_continuation_token(
+            cls,
+            polling_method: AsyncPollingMethod[PollingReturnType],
+            continuation_token: str,
+            **kwargs
+        ) -> "AsyncLROPoller[PollingReturnType]":
+        client, initial_response, deserialization_callback = polling_method.from_continuation_token(
+            continuation_token, **kwargs
+        )
+        return cls(client, initial_response, deserialization_callback, polling_method)
 
-    await polling_method.run()
-    return polling_method.resource()
+    def status(self) -> str:
+        """Returns the current status string.
+
+        :returns: The current status string
+        :rtype: str
+        """
+        return self._polling_method.status()

Not necessary for this PR - but it would be good if we could open up the contract regarding the expected return type of status in future.

lmazuel

comment created time in 11 days

Pull request review comment Azure/azure-sdk-for-python

LRO continuation_token

 def initialize(self, client, initial_response, deserialization_callback):
         except OperationFailed as err:
             raise HttpResponseError(response=initial_response.http_response, error=err)
 
+    def get_continuation_token(self):
+        # type() -> str
+        import pickle
+        return base64.b64encode(pickle.dumps(self._initial_response)).decode('ascii')
+
+    @classmethod
+    def from_continuation_token(cls, continuation_token, **kwargs):
+        # type(str, Any) -> Tuple
+        try:
+            client = kwargs["client"]
+        except KeyError:
+            raise ValueError("Need kwarg 'client' to be recreated from continuation_token")
+
+        try:
+            deserialization_callback = kwargs["deserialization_callback"]
+        except KeyError:
+            raise ValueError("Need kwarg 'deserialization_callback' to be recreated from continuation_token")
+
+        import pickle

Is importing pickle really slow? Just wondering why we've not added this at the top

lmazuel

comment created time in 11 days

Pull request review comment Azure/azure-sdk-for-python

LRO continuation_token

+#--------------------------------------------------------------------------
+#
+# Copyright (c) Microsoft Corporation. All rights reserved.
+#
+# The MIT License (MIT)
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the ""Software""), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+# THE SOFTWARE.
+#
+#--------------------------------------------------------------------------
+import time
+try:
+    from unittest import mock
+except ImportError:
+    import mock
+
+import pytest
+
+from azure.core import AsyncPipelineClient
+from azure.core.polling import *
+from msrest.serialization import Model
+
+
+@pytest.fixture
+def client():
+    # The poller itself don't use it, so we don't need something functionnal
+    return AsyncPipelineClient("https://baseurl")
+
+
+@pytest.mark.asyncio
+async def test_no_polling(client):
+    no_polling = AsyncNoPolling()
+
+    initial_response = "initial response"
+    def deserialization_cb(response):
+        assert response == initial_response
+        return "Treated: "+response
+
+    no_polling.initialize(client, initial_response, deserialization_cb)
+    await no_polling.run() # Should no raise and do nothing
+    assert no_polling.status() == "succeeded"
+    assert no_polling.finished()
+    assert no_polling.resource() == "Treated: "+initial_response
+
+    continuation_token = no_polling.get_continuation_token()
+    assert isinstance(continuation_token, str)
+
+    no_polling_revived_args = NoPolling.from_continuation_token(
+        continuation_token,
+        deserialization_callback=deserialization_cb,
+        client=client
+    )
+    no_polling_revived = NoPolling()
+    no_polling_revived.initialize(*no_polling_revived_args)
+    assert no_polling_revived.status() == "succeeded"
+    assert no_polling_revived.finished()
+    assert no_polling_revived.resource() == "Treated: "+initial_response
+
+
+class PollingTwoSteps(AsyncPollingMethod):
+    """An empty poller that returns the deserialized initial response.
+    """
+    def __init__(self, sleep=0):
+        self._initial_response = None
+        self._deserialization_callback = None
+        self._sleep = sleep
+
+    def initialize(self, _, initial_response, deserialization_callback):
+        self._initial_response = initial_response
+        self._deserialization_callback = deserialization_callback
+        self._finished = False
+
+    async def run(self):
+        """Empty run, no polling.
+        """
+        self._finished = True
+
+    def status(self):
+        """Return the current status as a string.
+        :rtype: str
+        """
+        return "succeeded" if self._finished else "running"
+
+    def finished(self):
+        """Is this polling finished?
+        :rtype: bool
+        """
+        return self._finished
+
+    def resource(self):
+        return self._deserialization_callback(self._initial_response)
+
+    def get_continuation_token(self):
+        return self._initial_response
+
+    @classmethod
+    def from_continuation_token(cls, continuation_token, **kwargs):
+        # type(str, Any) -> Tuple
+        initial_response = continuation_token
+        deserialization_callback = kwargs['deserialization_callback']
+        return None, initial_response, deserialization_callback
+
+
+@pytest.mark.asyncio
+async def test_poller(client):
+
+    # Same the poller itself doesn't care about the initial_response, and there is no type constraint here
+    initial_response = "Initial response"
+
+    # Same for deserialization_callback, just pass to the polling_method
+    def deserialization_callback(response):
+        assert response == initial_response
+        return "Treated: "+response
+
+    method = AsyncNoPolling()
+
+    poller = AsyncLROPoller(client, initial_response, deserialization_callback, method)
+
+    result = await poller.result()
+    assert poller.done()
+    assert result == "Treated: "+initial_response
+    assert poller.status() == "succeeded"
+
+    # Test with a basic Model
+    poller = AsyncLROPoller(client, initial_response, Model, method)
+    assert poller._polling_method._deserialization_callback == Model.deserialize
+
+    # Test poller that method do a run
+    method = PollingTwoSteps(sleep=1)
+    poller = AsyncLROPoller(client, initial_response, deserialization_callback, method)
+
+    result = await poller.result()
+    assert result == "Treated: "+initial_response
+    assert poller.status() == "succeeded"
+
+    # Test continuation token
+    cont_token = poller.continuation_token()
+
+    method = PollingTwoSteps(sleep=1)
+    new_poller = AsyncLROPoller.from_continuation_token(
+        continuation_token=cont_token,
+        client=client,
+        initial_response=initial_response,
+        deserialization_callback=Model,
+        polling_method=method
+    )
+    result = await poller.result()
+    assert result == "Treated: "+initial_response
+    assert poller.status() == "succeeded"
+

Do we want to add a test for wrapping the poller in an asyncio Task to try cancellation, done_callbacks etc? Could also do this for the other supported async event loops.
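
A rough sketch of what such a test could look like (hypothetical; assumes result() returns a coroutine as in the diff above):

import asyncio

@pytest.mark.asyncio
async def test_poller_as_task(client):
    method = PollingTwoSteps(sleep=1)
    poller = AsyncLROPoller(client, "Initial response", lambda r: r, method)

    # Wrapping the poller in a Task exposes cancel() and add_done_callback()
    task = asyncio.ensure_future(poller.result())
    task.cancel()
    with pytest.raises(asyncio.CancelledError):
        await task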

lmazuel

comment created time in 11 days

issue opened Azure/azure-sdk-for-python

[Tables] Set up Tables test suite

Start moving the test suite ported from the previous SDK: https://github.com/Azure/azure-cosmos-table-python/tree/master/azure-cosmosdb-table/tests

created time in 11 days

push event annatisch/azure-sdk-for-python

Laurent Mazuel

commit sha 09e6034a2d908935a32aaf6737fdab88c8690243

Make x-ms-request-id safe whitelist headers (#10967) * Make x-ms-request-id safe whitelist headers * ChangeLog

Krista Pratico

commit sha b3cfe1a6ad40a6ef5ddfdaf4b85337e6d280f13b

[formrecognizer] fix name in setup and some formatting (#10970) * fix name in setup * fix for github.io readme formatting/references

Bryan Van de Ven

commit sha a42c806218d99c4e31175a0d12d6f6835c6113ae

Implement search Skillsets operation (#10892) * checkpoint * sort top level imports/all * Return real Skillset models * allow passing Skillset object for create_or_update * list_skillsets -> get_skillsets * list_indexes -> get_indexes * list_synonym_maps -> get_synonym_maps

Rakshith Bhyravabhotla

commit sha bd99ea5059c3a90c45b051c4113de03aeb73a8ae

Add attributes in method AbstractSpan.link(cls, traceparent) (#10906) * Initial Commit * test + log + lint * comments * oops * fix test

Zim Kalinowski

commit sha 492eb8c87c59ff76f5ab5999abb0a17c35d0fb8f

version and changelog (#10951) * version and changelog * Packaging update of azure-mgmt-recoveryservices Co-authored-by: Azure SDK Bot <aspysdk2@microsoft.com>

Qiaoqiao Zhang

commit sha f55627d7cf5aa80f64fbaee18abdd792eb837b31

release-netapp-mgmt-sdk (#10955) * release-netapp-mgmt-sdk * Update CHANGELOG.md * Update version.py * Packaging update of azure-mgmt-netapp Co-authored-by: Zim Kalinowski <zikalino@microsoft.com> Co-authored-by: Azure SDK Bot <aspysdk2@microsoft.com>

Qiaoqiao Zhang

commit sha 4d52b67bcba2e5e081ac14979e096cdccf62e980

release-for-mgmt-loganalytics (#10990) * release-for-mgmt-loganalytics * change log

iscai-msft

commit sha 9e8a272b5b48d354b941e32fe301032a830b7f6a

fixed ClientSecretCredential initialization (#11000)

Xiaoxi Fu

commit sha c740ce25c93a22f8976659896cd5dd34101deba4

[DataLake][Bug]Upload is not working with umask and permissions (#10845)

Laurent Mazuel

commit sha d5508e62fa542888914e88394973a2dd9e38a617

Add force generation to SwaggerToSdk (#10933)

Krista Pratico

commit sha fd77736ec87af50a108f461f1e78ad11314942ee

[formrecognizer] edits to docstrings (#11003) * edits to docstrings * correct date

Zim Kalinowski

commit sha 269715f1c64311ff165238f111dcec6d92dc2c47

updating setup template (#11022)

Krista Pratico

commit sha a77ebda38073969606763650e9970489c55c9402

[formrecognizer] handle unsupervised pages better with service bug (#11017) * handle unsupervised pages better * python 2 oops

Azure SDK Bot

commit sha 0013ab66cb1762df667ced7994d42e149da3d278

Increment package version after release of azure_ai_formrecognizer (#11026)

iscai-msft

commit sha f3b5e44bb526c32103b5424c4fee6b9bfc05aa0a

add regular endpoint in new env variable (#11031)

Scott Beddall

commit sha 2d1bd2bca9a843b10c722992c3bad81d2bb6e225

Update Ubuntu VM Image to 18.04 (#11032) * updating the VM image, need to update the recording as well. * update release

Zim Kalinowski

commit sha f0c3be4b74ee65012e3e5a31280cde47e9fc1d1c

fixing merge error (#11039)

Zim Kalinowski

commit sha ace5cc1cc18a38b12e5aacf4f46b4919cf970c36

Fixing compute test (#11036) * trigger test * Packaging update of azure-mgmt-compute * fix test * fix * fix duplicated comment Co-authored-by: Azure SDK Bot <aspysdk2@microsoft.com>

KieranBrantnerMagee

commit sha d26c102bae730b207ba92b89345d8033f2530feb

Servicebus - Track2 - Remove timeout from Send (#11002) * With retry options available, send should no longer require its own timeout. Removes the parameter from sync and async clients, adds a note to changelog about the delta.

Bryan Van de Ven

commit sha a4f47e17e7abf14c9a54789d108ea26c7ca1cea1

rename SearchIndexClient -> SearchClient (#10964)

push time in 12 days

issue closed Azure/azure-sdk-for-python

[Core] Support changesets in multipart requests

A feature of Odata batch request formatting is changesets. This is one or more nested boundaries within the batch as documented here: http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part1-protocol/odata-v4.0-errata03-os-part1-protocol-complete.html#_Toc453752316

This is a requirement to support Azure Storage/Cosmos Tables API.

closed time in 12 days

annatisch

push event Azure/azure-sdk-for-python

annatisch

commit sha cb52715da120c495a9fbd640fd91931748f9bacf

[cosmos] readme review feedback (#11527) * readme review feedback * Removed extra section

push time in 13 days

Pull request review comment Azure/azure-sdk-for-python

Add readme deprecation info for track1

   ## 0.50.3 (unreleased)
 
+> **NOTE**: Starting with the GA release of version 7.0.0, (Currently in preview) this package will be deprecated.

In the Cosmos notes we changed "GA" -> "stable", as we weren't sure whether "GA" was too much of a corporate acronym... but up to you on this one.

KieranBrantnerMagee

comment created time in 13 days

push event Azure/azure-sdk-for-python

annatisch

commit sha dfe5b0957c1af756e125ad625cfd9834b43cdc20

[Core] Support multipart changesets (#10972) * Support multipart pipeline context * Support sending multipart changesets * Added receive tests * Fix pylint + mypy * Update to use recursive requests * CI fix * Update changeset response decoding * Make mypy happy

push time in 13 days

PR merged Azure/azure-sdk-for-python

[Core] Support multipart changesets

Add support for the Odata changeset structure in multipart batch requests. This is needed for the Tables service. #10485

http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part1-protocol/odata-v4.0-errata03-os-part1-protocol-complete.html#_Toc453752316

+1375 -108

3 comments

5 changed files

annatisch

pr closed time in 13 days

pull request comment Azure/azure-sdk-for-python

[Core] Support multipart changesets

/azp run python - storage - tests

annatisch

comment created time in 13 days

Pull request review commentAzure/azure-sdk-for-python

[Core] Support multipart changesets

 def prepare_multipart_body(self):         main_message.add_header("Content-Type", "multipart/mixed")         if boundary:             main_message.set_boundary(boundary)-        for i, req in enumerate(requests):++        for req in requests:             part_message = Message()-            part_message.add_header("Content-Type", "application/http")-            part_message.add_header("Content-Transfer-Encoding", "binary")-            part_message.add_header("Content-ID", str(i))-            part_message.set_payload(req.serialize())+            if req.multipart_mixed_info:+                content_index = req.prepare_multipart_body(content_index=content_index)+                part_message.add_header("Content-Type", req.headers['Content-Type'])

Actually Content-Type is the one header that must be present for both parent changeset messages and leaf sub-changeset messages. This is needed because the changeset message must declare the changeset boundary definition. You can see the results in the test here: Changeset content-type: https://github.com/Azure/azure-sdk-for-python/pull/10972/files#diff-102c265bbd4559f55d45afbd5dae4473R366

Leaf message content-type: https://github.com/Azure/azure-sdk-for-python/pull/10972/files#diff-102c265bbd4559f55d45afbd5dae4473R369
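For anyone following the thread, here's a minimal sketch of that nested structure using the standard library's email.message.Message (which prepare_multipart_body builds on) - the boundary values and the PATCH payload are illustrative placeholders, not the SDK's real output:

from email.message import Message

# The changeset part must carry its own Content-Type so the changeset
# boundary definition is declared alongside the outer batch boundary.
changeset = Message()
changeset.add_header("Content-Type", "multipart/mixed")
changeset.set_boundary("changeset_placeholder")

# Leaf sub-request parts wrap a serialized HTTP request.
leaf = Message()
leaf.add_header("Content-Type", "application/http")
leaf.add_header("Content-Transfer-Encoding", "binary")
leaf.add_header("Content-ID", "0")
leaf.set_payload("PATCH /mytable HTTP/1.1\r\n\r\n")
changeset.attach(leaf)

# The outer batch message declares its own boundary and nests the changeset.
batch = Message()
batch.add_header("Content-Type", "multipart/mixed")
batch.set_boundary("batch_placeholder")
batch.attach(changeset)
print(batch.as_string())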

annatisch

comment created time in 13 days

PR opened Azure/azure-sdk-for-python

Reviewers
[cosmos] readme review feedback
+47 -52

0 comments

2 changed files

pr created time in 13 days

push eventannatisch/azure-sdk-for-python

antisch

commit sha 5391083f36a68a61c99666e0de1197cf11efd875

Removed extra section

view details

push time in 13 days

push eventannatisch/azure-sdk-for-python

iscai-msft

commit sha 0160912a4dd0f027a533d33f91f1a73afea41034

[text analytics] Update ta tests (#11461)

view details

Krista Pratico

commit sha 146bc864c8917f62ebe8325a1098d0b179a5069c

[formrecognizer] consistency on handling LRO's with failed status (#11445) * samples handle invalid models from training * update to throw exception on training methods that return invalid model * update tests now that we treat invalid status differently * pass response to error

view details

Charles Lowell

commit sha 74f4fd3b44c1e39edb61d65debdcd1d7d28b3deb

Separate modules for client credential types (#11496)

view details

openapi-sdkautomation[bot]

commit sha 99668db644fe606c24cc7cb553135b6614c10ffd

[ReleasePR azure-cognitiveservices-vision-computervision] [Cognitive Service Computer Vision] Create CV v3.0 API version swagger (#11464) * Generated from 6c2e36f271e8bd30f4ec2ba3c79890bd441feed2 Run Prettier script on new examples * ChangeLog * Udpate Readme Co-authored-by: SDK Automation <sdkautomation@microsoft.com> Co-authored-by: Laurent Mazuel <laurent.mazuel@gmail.com>

view details

Daniel Jurek

commit sha 2f2ed373b82680c54d1a00a0d554af0ab42e94d7

update parameters to use SubscriptionConfiguration (#11425)

view details

Scott Beddall

commit sha 6e12533eb012eb65e24bbe9702bdb8829a804bf5

update artifactname to ensure that wheel get's picked up properly (#11502)

view details

annatisch

commit sha 00779346c3b910dd419207d8918148ec43623f2b

[Cosmos] GA release prep (#11468) * Version bump * Update classifier * Update readme URLs * Update samples readme * One more URL * Changelog feedback

view details

Krista Pratico

commit sha 88fae6ec703ac3641459b5d07158c9ed710524fe

[formrecognizer] update docs to specify encoded url input (#11471) * update docs to specify encoded url input * fix receipt * add back missing types

view details

antisch

commit sha 64a8143f5b8616d07203a00f5e2274e673d56c65

readme review feedback

view details

push time in 13 days

push eventAzure/azure-sdk-for-python

annatisch

commit sha 00779346c3b910dd419207d8918148ec43623f2b

[Cosmos] GA release prep (#11468) * Version bump * Update classifier * Update readme URLs * Update samples readme * One more URL * Changelog feedback

view details

push time in 14 days

PR merged Azure/azure-sdk-for-python

Reviewers
[Cosmos] GA release prep
  • [x] Validated sdist build with live tests in Python 2.7, 3.5, 3.8
  • [x] Validated wheel build with live tests in Python 2.7, 3.5, 3.8
  • [x] Validated package version
  • [x] Validated User-Agent value
  • [x] Validated updated release notes
  • [x] Validated all readme URLs
  • [x] Validated setup classifiers
  • [x] Validated dependencies
+21 -24

4 comments

6 changed files

annatisch

pr closed time in 14 days

push eventannatisch/azure-sdk-for-python

antisch

commit sha 57ec261e8935c8e45f6551aa04a11d02e1ce2542

Changelog feedback

view details

push time in 14 days

Pull request review commentAzure/azure-sdk-for-python

Changefeed

+#!/usr/bin/env python++# -------------------------------------------------------------------------+# Copyright (c) Microsoft Corporation. All rights reserved.+# Licensed under the MIT License. See License.txt in the project root for+# license information.+# --------------------------------------------------------------------------+++import os+import re++from setuptools import setup, find_packages+++# Change the PACKAGE_NAME only to change folder and different name+PACKAGE_NAME = "azure-storage-blob-changefeed"+NAMESPACE_NAME = "azure.storage.blobchangefeed"

In this case we shouldn't need a namespace package, because we already have a dependency on blobs, which owns the namespace.
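Roughly what I have in mind (a sketch only - the exclude list mirrors the pattern in the repo's other storage setup.py files):

from setuptools import setup, find_packages

setup(
    name="azure-storage-blob-changefeed",
    # 'azure' and 'azure.storage' are namespace levels already owned by
    # the azure-storage-blob dependency, so they're excluded here rather
    # than re-declared as a new namespace package.
    packages=find_packages(exclude=["tests", "azure", "azure.storage"]),
    install_requires=["azure-storage-blob"],
)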

xiafu-msft

comment created time in 14 days

push eventannatisch/azure-sdk-for-python

Azure SDK Bot

commit sha 167264dea06acb9f8be1ad67440e7f17edb454e5

Sync eng/common directory with azure-sdk-tools repository (#11469)

view details

Daniel Jurek

commit sha dc99329f59447df61bcd3f4c657b371cc102e622

update CODEOWNERS with smoke test owners (#11404) * update CODEOWNERS with smoke test owners * add @southpolesteve for cosmos failure notifications

view details

Yijun Xie

commit sha 91de6c7df3fe84a90cbb28a88469e58a74a03407

[Service Bus] Enable pylint and mypy (#11316)

view details

Wei Dong

commit sha a5e144795e0d6fbdcafd27c346e33a31842b8722

add ci to azure-mgmt-eventhub (#11459)

view details

Wei Dong

commit sha 4e48ee5871d806c63b5db005e3910ee43beecd12

Release azure mgmt hybridkubernetes (#11483) * Generated from 3e3c09cb2ebcd9b902007bc0a344c3b71a10afe8 updated name * initial release * initial release * Packaging update of azure-mgmt-hybridkubernetes * Update version.py Co-authored-by: SDK Automation <sdkautomation@microsoft.com> Co-authored-by: Azure SDK Bot <aspysdk2@microsoft.com> Co-authored-by: Zim Kalinowski <zikalino@microsoft.com>

view details

antisch

commit sha 17ee1ffbf183e0d95b61e3cdd23cea247fcb3c45

Merge remote-tracking branch 'upstream/master' into cosmos-ga

view details

antisch

commit sha b93a4f9bc59e1aac8f310e668fada4bc35ad2d09

One more URL

view details

push time in 14 days

pull request commentAzure/azure-sdk-for-python

[Cosmos] GA release prep

/azp run python - cosmos - tests

annatisch

comment created time in 14 days

pull request commentAzure/azure-sdk-for-python

[Cosmos] GA release prep

/azp run python - cosmos -tests

annatisch

comment created time in 14 days

push eventannatisch/azure-sdk-for-python

antisch

commit sha fbc5ed6b9cf42f878fd859667e49b27d01efc097

Update samples readme

view details

push time in 17 days

push eventannatisch/azure-sdk-for-python

antisch

commit sha 07162ef5fc3893b6db2c37187ce6cb8b51a27625

Update classifier

view details

antisch

commit sha 9429406da904cae7a81c62f56e971a10c6b13b3d

Update readme URLs

view details

push time in 17 days

create barnchannatisch/azure-sdk-for-python

branch : cosmos-ga

created branch time in 17 days

Pull request review commentAzure/azure-uamqp-python

Cbs auth and management support

 # license information. #-------------------------------------------------------------------------- -import threading-import struct-import uuid import logging-import time-from urllib.parse import urlparse-from enum import Enum-from io import BytesIO+from collections import namedtuple -from .endpoints import Source, Target+from .sender import SenderLink+from .receiver import ReceiverLink from .constants import (-    DEFAULT_LINK_CREDIT,-    SessionState,-    SessionTransferState,     ManagementLinkState,-    LinkDeliverySettleReason,     LinkState,-    Role,     SenderSettleMode,-    ReceiverSettleMode-)-from .performatives import (-    AttachFrame,-    DetachFrame,-    TransferFrame,-    DispositionFrame,-    FlowFrame,+    ReceiverSettleMode,+    ManagementExecuteOperationResult,+    ManagementOpenResult,+    SEND_DISPOSITION_REJECT )  _LOGGER = logging.getLogger(__name__) +PendingMgmtOperation = namedtuple('PendingMgmtOperation', ['message', 'on_execute_operation_complete'])+  class ManagementLink(object):     """      """-    def __init__(self, session, endpoint, **kwargs):+    def __init__(+            self,+            session,+            endpoint,+            status_code_field=b'statusCode',+            status_description_field=b'statusDescription',+            **kwargs+    ):         self.next_message_id = 0         self.state = ManagementLinkState.IDLE         self._pending_operations = []         self._session = session-        self._request_link = session.create_sender_link(-            endpoint, on_link_state_change=self._on_sender_state_change)-        self._response_link = session.create_receiver_link(-            endpoint, on_link_state_change=self._on_receiver_state_change)-        self._on_mgmt_error = kwargs.get('on_mgmt_error')+        self._request_link = session.create_sender_link(  # type: SenderLink+            endpoint,+            on_link_state_change=self._on_sender_state_change,+            send_settle_mode=SenderSettleMode.Unsettled,+            rcv_settle_mode=ReceiverSettleMode.First+        )+        self._response_link = session.create_receiver_link(  # type: ReceiverLink+            endpoint,+            on_link_state_change=self._on_receiver_state_change,+            on_message_received=self._on_message_received,+            send_settle_mode=SenderSettleMode.Unsettled,+            rcv_settle_mode=ReceiverSettleMode.First+        )+        self._on_amqp_management_error = kwargs.get('on_amqp_management_error')+        self._on_amqp_management_open_complete = kwargs.get('on_amqp_management_open_complete')++        self._status_code_field = status_code_field+        self._status_description_field = status_description_field++        self._sender_connected = False+        self._receiver_connected = False      def __enter__(self):         self.open()         return self          def __exit__(self, *args):         self.close()-    -    def _set_state(self, new_state):-        previous_state = self.state-        self.state = new_state-        if new_state == ManagementLinkState.ERROR and self._on_mgmt_error:-            self._on_mgmt_error()      def _on_sender_state_change(self, previous_state, new_state):+        _LOGGER.info("Management link sender state changed: %r -> %r", previous_state, new_state)         if new_state == previous_state:             return-        #if self.state == ManagementLinkState.OPENING:-        #    if new_state == LinkState.OPENING:-        #elif self.state == ManagementLinkState.OPEN:+        if self.state == ManagementLinkState.OPENING:+            if new_state == LinkState.ATTACHED:+                self._sender_connected = True+                if self._receiver_connected:+                    self.state = ManagementLinkState.OPEN+                    self._on_amqp_management_open_complete(ManagementOpenResult.OK)+            elif new_state in [LinkState.DETACHED, LinkState.DETACH_SENT, LinkState.DETACH_RCVD, LinkState.ERROR]:+                self.state = ManagementLinkState.IDLE+                self._on_amqp_management_open_complete(ManagementOpenResult.ERROR)+        elif self.state == ManagementLinkState.OPEN:+            if new_state is not LinkState.ATTACHED:+                self.state = ManagementLinkState.ERROR+                self._on_amqp_management_error()         elif self.state == ManagementLinkState.CLOSING:-            if new_state not in [LinkState.DETACHED, LinkState.DETACH_SENT, LinkState.DETACH_RCVD]:

DETACH_SENT/DETACH_RCVD don't actually work yet... The C implementation only had one state to represent these two - which is how I started - but then I thought I might introduce them as two separate states. No need to change this now - just a heads-up

yunhaoling

comment created time in 17 days

Pull request review commentAzure/azure-uamqp-python

Cbs auth and management support

 # license information. #-------------------------------------------------------------------------- -import threading-import struct-import uuid import logging-import time-from urllib.parse import urlparse-from enum import Enum-from io import BytesIO+from collections import namedtuple -from .endpoints import Source, Target+from .sender import SenderLink+from .receiver import ReceiverLink from .constants import (-    DEFAULT_LINK_CREDIT,-    SessionState,-    SessionTransferState,     ManagementLinkState,-    LinkDeliverySettleReason,     LinkState,-    Role,     SenderSettleMode,-    ReceiverSettleMode-)-from .performatives import (-    AttachFrame,-    DetachFrame,-    TransferFrame,-    DispositionFrame,-    FlowFrame,+    ReceiverSettleMode,+    ManagementExecuteOperationResult,+    ManagementOpenResult,+    SEND_DISPOSITION_REJECT )  _LOGGER = logging.getLogger(__name__) +PendingMgmtOperation = namedtuple('PendingMgmtOperation', ['message', 'on_execute_operation_complete'])+  class ManagementLink(object):     """      """-    def __init__(self, session, endpoint, **kwargs):+    def __init__(+            self,+            session,+            endpoint,+            status_code_field=b'statusCode',+            status_description_field=b'statusDescription',

These two should be moved into kwargs. They only exist to account for a bug in the AMQP spec/service implementation.
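Something along these lines (a sketch only), so the workaround fields stay out of the public signature:

def __init__(self, session, endpoint, **kwargs):
    # Workaround for the AMQP spec/service mismatch - overridable via
    # kwargs, defaulting to the field names the services actually send.
    self._status_code_field = kwargs.pop('status_code_field', b'statusCode')
    self._status_description_field = kwargs.pop(
        'status_description_field', b'statusDescription')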

yunhaoling

comment created time in 17 days

Pull request review commentAzure/azure-uamqp-python

Cbs auth and management support

 def create_request_response_link_pair(self, endpoint, **kwargs):             network_trace=kwargs.pop('network_trace', self.network_trace),             **kwargs) +    def mgmt_request(self, message, operation=None, operation_type=None, node='$management', **kwargs):

I would rather not have this function. I think management link opening should be handled the same way link opening is handled, and requests should then be passed straight to the management link in the same way that outgoing messages are passed to the link. We could put logic like this into the clients though, as they are a higher-level implementation.

yunhaoling

comment created time in 17 days

Pull request review commentAzure/azure-uamqp-python

Cbs auth and management support

 def _outgoing_disposition(self, frame):      def _incoming_disposition(self, frame):         for link in self._input_handles.values():-            link._incoming_disposition(frame)+            if hasattr(link, '_incoming_disposition'):

Rather than do this check - we should just add an empty _incoming_disposition function to the base link class that does nothing. I think that might be more efficient than the hasattr check.
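For example (a sketch of the idea, not the final shape):

class Link(object):
    def _incoming_disposition(self, frame):
        """Base links don't track deliveries, so dispositions are ignored."""

class SenderLink(Link):
    def _incoming_disposition(self, frame):
        ...  # only links that settle deliveries override this with real logic

# The session can then dispatch unconditionally, no hasattr required:
# for link in self._input_handles.values():
#     link._incoming_disposition(frame)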

yunhaoling

comment created time in 17 days

issue commentAzure/azure-sdk-for-python

[azure-cosmos] Distributed tracing: wrong parent-child relationship between spans and duration issues

This appears to be related to how spans are handled in azure-core pageables. There is ongoing discussion on the best approach to this topic. @southpolesteve - no changes will need to be introduced into the azure-cosmos SDK for this for the time being. It's likely that if a fix is needed, it will go into azure-core.

lmolkova

comment created time in 17 days

pull request commentAzure/azure-uamqp-python

Support for connection desired capabilities

This patch looks good too - can we add a test using the amqp:link:redirect mentioned above?

yunhaoling

comment created time in 18 days

pull request commentAzure/azure-uamqp-python

Connection idle timeout patch

@yunhaoling - this patch looks good to me - is there any way we can add a test for it?

yunhaoling

comment created time in 18 days

push eventannatisch/azure-uamqp-python

annatisch

commit sha cae3c7e0758b8f0baf7c5b4f6211535a114a79a9

Removing feedback monitor

view details

push time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Changefeed

+# -------------------------------------------------------------------------+# Copyright (c) Microsoft Corporation. All rights reserved.+# Licensed under the MIT License. See License.txt in the project root for+# license information.+# --------------------------------------------------------------------------++_SERVICE_PARAMS = {+    "blob": {"primary": "BlobEndpoint", "secondary": "BlobSecondaryEndpoint"},+}+++def parse_connection_str(conn_str, credential, service):

We already have a dependency on Storage-Blobs being installed - maybe we could just call into that?
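e.g. something like this - assuming the helper keeps its current home in the blobs package (the private module path is my assumption and would need verifying):

# Reuse the blobs implementation instead of maintaining a copy here.
from azure.storage.blob._shared.base_client import parse_connection_str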

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Changefeed

+#!/usr/bin/env python++# -------------------------------------------------------------------------+# Copyright (c) Microsoft Corporation. All rights reserved.+# Licensed under the MIT License. See License.txt in the project root for+# license information.+# --------------------------------------------------------------------------+++import os+import re++from setuptools import setup, find_packages+++# Change the PACKAGE_NAME only to change folder and different name+PACKAGE_NAME = "azure-storage-blob-changefeed"+NAMESPACE_NAME = "azure.storage.blobchangefeed"

I feel like the namespace should also be "azure.storage.blob.changefeed". Is there a reason that it isn't? I'll discuss it with Johan as well.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 class AccountSasPermissions(object):     :param bool process:         Valid for the following Object resource type only: queue messages.     """-    def __init__(self, read=False, write=False, delete=False, list=False,  # pylint: disable=redefined-builtin+    def __init__(self, read=False, write=False, delete=False, delete_version=False,

I think this class is exposed publicly, right? The new parameter will need to go at the end of the method signature - otherwise it will be a breaking change for anyone who specified arguments positionally.
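To illustrate the hazard with a hypothetical before/after:

# Before: an existing caller passes everything positionally.
def __init__(self, read=False, write=False, delete=False, list=False):  # pylint: disable=redefined-builtin
    ...

# AccountSasPermissions(True, False, True, True) -> delete=True, list=True

# After inserting delete_version mid-signature, that same call silently
# becomes delete_version=True, list=False - a behavioral break.
def __init__(self, read=False, write=False, delete=False,
             delete_version=False, list=False):  # pylint: disable=redefined-builtin
    ...

# Appending delete_version at the end keeps old call sites working.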

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 class BlobSasPermissions(object):         destination of a copy operation within the same account.     :param bool delete:         Delete the blob.+    :param bool delete_version:+        Delete the blob version for the versioning enabled storage account.

I'm confused by this.... does it mean we are giving permission to delete non-current blob versions? Or any blob in a version-enabled account?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 def generate_blob_sas(         container_name,  # type: str         blob_name,  # type: str         snapshot=None,  # type: Optional[str]+        version_id=None,  # type: Optional[str]

Same comment as above - needs to move to the end of the parameters, or it could be a breaking change.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 def _delete_blob_options(self, delete_snapshots=False, **kwargs):             raise ValueError("The delete_snapshots option cannot be used with a specific snapshot.")         options = self._generic_delete_blob_options(delete_snapshots, **kwargs)         options['snapshot'] = self.snapshot+        options['version_id'] = kwargs.pop('version_id', None) or self.version_id

If we're also supporting version override for download and delete, we should add that to the docstring :)

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 def get_blob_properties(self, **kwargs):             Required if the blob has an active lease. Value can be a BlobLeaseClient object             or the lease ID as a string.         :paramtype lease: ~azure.storage.blob.BlobLeaseClient or str+        :keyword str version_id:+            The version id parameter is an opaque DateTime+            value that, when present, specifies the version of the blob to delete.+            It for service version 2019-10-10 and newer.

"Introduced in service version 2019-10-10" Also add the docstring tag for new parameters in the SDK.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Blob versioning

 def get_blob_properties(self, **kwargs):             Required if the blob has an active lease. Value can be a BlobLeaseClient object             or the lease ID as a string.         :paramtype lease: ~azure.storage.blob.BlobLeaseClient or str+        :keyword str version_id:+            The version id parameter is an opaque DateTime+            value that, when present, specifies the version of the blob to delete.

"blob to delete"

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete container

 def delete_container(         except StorageErrorException as error:             process_storage_error(error) +    @distributed_trace+    def undelete_container(self, deleted_container_name, deleted_container_version, **kwargs):

The ContainerClient already represents a single named container, so we should remove this parameter and use the client's container_name directly. If we must pass in this parameter, then this operation should move up to the parent client.
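i.e. the method body would look roughly like this (a sketch - the generated-client call is illustrative, not the actual operation name):

def undelete_container(self, deleted_container_version, **kwargs):
    # type: (str, **Any) -> None
    # The ContainerClient already knows which container it represents.
    self._client.container.restore(
        deleted_container_name=self.container_name,
        deleted_container_version=deleted_container_version,
        **kwargs)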

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete container

 def list_containers(         :param bool include_metadata:             Specifies that container metadata to be returned in the response.             The default value is `False`.+        :keyword bool include_deleted:+            Specifies that deleted containers to be returned in the response. This is for container restore enabled+            account. The default value is `False`.

Can we add the docstring tag for the SDK version that this was introduced in?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete container

 def delete_container(         except StorageErrorException as error:             process_storage_error(error) +    @distributed_trace+    def undelete_container(self, deleted_container_name, deleted_container_version, **kwargs):+        # type: (str, str, **Any) -> None+        """Restores soft-deleted container.++        Operation will only be successful if used within the specified number of days+        set in the delete retention policy.++        :param str deleted_container_name:+            Specifies the name of the deleted container to restore.+            Servivce Version 2019-12-12 and laster.+        :param str deleted_container_version:+            Specifies the version of the deleted container to restore.+            Servivce Version 2019-12-12 and laster.

Typo: "later" We should also add the docstring tag for which SDK version this was introduced in.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete share

 def delete_share(         except StorageErrorException as error:             process_storage_error(error) +    @distributed_trace+    def undelete_share(self, deleted_share_name, deleted_share_version, **kwargs):+        # type: (str, str, **Any) -> None+        """Restores soft-deleted share.++        Operation will only be successful if used within the specified number of days+        set in the delete retention policy.++        :param str deleted_share_name:+            Specifies the name of the deleted share to restore.+            Service Version 2019-12-12 and later.+        :param str deleted_share_version:+            Specifies the version of the deleted share to restore.+            Service Version 2019-12-12 and later.+        :keyword int timeout:+            The timeout parameter is expressed in seconds.+        :rtype: None

Could we also add the docstring tag to indicate which version of the SDK this method was introduced in?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete share

 def delete_share(         except StorageErrorException as error:             process_storage_error(error) +    @distributed_trace+    def undelete_share(self, deleted_share_name, deleted_share_version, **kwargs):

I see we're checking the name against self.share_name below - in that case we should definitely remove this parameter and just use self.share_name directly.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

Undelete share

 def delete_share(         except StorageErrorException as error:             process_storage_error(error) +    @distributed_trace+    def undelete_share(self, deleted_share_name, deleted_share_version, **kwargs):

deleted_share_name shouldn't be a parameter - as the ShareClient already has a name. If we want to pass in a name to this function - then we should move this function up to the parent client.

xiafu-msft

comment created time in 18 days

pull request commentAzure/azure-sdk-for-python

[DataLake][BugFix]encode the rename source url

LGTM - but please add a bugfix line to the changelog referencing the issue number :)

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

+# coding: utf-8+# -------------------------------------------------------------------------+# Copyright (c) Microsoft Corporation. All rights reserved.+# Licensed under the MIT License. See License.txt in the project root for+# license information.+# --------------------------------------------------------------------------+from enum import Enum++from _shared.asynctestcase import AsyncStorageTestCase++try:+    from urllib.parse import quote+except ImportError:+    from urllib2 import quote++from _shared.testcase import GlobalStorageAccountPreparer+from azure.core.exceptions import (+    ResourceExistsError)+from azure.storage.blob import BlobBlock+from azure.storage.blob.aio import BlobServiceClient+#------------------------------------------------------------------------------++TEST_CONTAINER_PREFIX = 'container'+TEST_BLOB_PREFIX = 'blob'+#------------------------------------------------------------------------------++class StorageBlobTagsTest(AsyncStorageTestCase):++    async def _setup(self, storage_account, key):+        self.bsc = BlobServiceClient(self.account_url(storage_account, "blob"), credential=key)+        self.container_name = self.get_resource_name("container")+        if self.is_live:+            container = self.bsc.get_container_client(self.container_name)+            try:+                await container.create_container(timeout=5)+            except ResourceExistsError:+                pass+        self.byte_data = self.get_random_bytes(1024)++    #--Helpers-----------------------------------------------------------------+    def _get_blob_reference(self):+        return self.get_resource_name(TEST_BLOB_PREFIX)++    async def _create_block_blob(self, blob_tags=None, container_name=None):+        blob_name = self._get_blob_reference()+        blob_client = self.bsc.get_blob_client(container_name or self.container_name, blob_name)+        resp = await blob_client.upload_blob(self.byte_data, length=len(self.byte_data), overwrite=True, blob_tags=blob_tags)+        return blob_client, resp++    async def _create_empty_block_blob(self):+        blob_name = self._get_blob_reference()+        blob_client = self.bsc.get_blob_client(self.container_name, blob_name)+        resp = await blob_client.upload_blob(b'', length=0, overwrite=True)+        return blob_client, resp++    async def _create_append_blob(self, blob_tags=None):+        blob_name = self._get_blob_reference()+        blob_client = self.bsc.get_blob_client(self.container_name, blob_name)+        resp = await blob_client.create_append_blob(blob_tags=blob_tags)+        return blob_client, resp++    async def _create_page_blob(self, blob_tags=None):+        blob_name = self._get_blob_reference()+        blob_client = self.bsc.get_blob_client(self.container_name, blob_name)+        resp = await blob_client.create_page_blob(blob_tags=blob_tags, size=512)+        return blob_client, resp++    async def _create_container(self, prefix="container"):+        container_name = self.get_resource_name(prefix)+        try:+            await self.bsc.create_container(container_name)+        except:+            pass+        return container_name++    #-- test cases for blob tags ----------------------------------------------++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_set_blob_tags(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_client, _ = await self._create_block_blob()++        # Act+        blob_tags = {"tag1": "firsttag", "tag2": "secondtag", "tag3": "thirdtag"}+        resp = await blob_client.set_blob_tags(blob_tags)++        # Assert+        self.assertIsNotNone(resp)++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_set_blob_tags_for_a_version(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        # use this version to set tag+        blob_client, resp = await self._create_block_blob()+        await self._create_block_blob()+        # TODO: enable versionid for this account and test set tag for a version++        # Act+        blob_tags = {"tag1": "firsttag", "tag2": "secondtag", "tag3": "thirdtag"}+        resp = await blob_client.set_blob_tags(blob_tags, version_id=resp['version_id'])++        # Assert+        self.assertIsNotNone(resp)++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_get_blob_tags(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_client, resp = await self._create_block_blob()++        # Act+        blob_tags = {"tag1": "firsttag", "tag2": "secondtag", "tag3": "thirdtag"}+        await blob_client.set_blob_tags(blob_tags)++        resp = await blob_client.get_blob_tags()++        # Assert+        self.assertIsNotNone(resp)+        self.assertEqual(len(resp), 3)+        for key, value in resp.items():+            self.assertEqual(blob_tags[key], value)++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_get_blob_tags_for_a_snapshot(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_tags = {"+-./:=_ ": "firsttag", "tag2": "+-./:=_", "+-./:=_1": "+-./:=_"}+        blob_client, resp = await self._create_block_blob(blob_tags=blob_tags)++        snapshot = await blob_client.create_snapshot()+        snapshot_client = self.bsc.get_blob_client(self.container_name, blob_client.blob_name, snapshot=snapshot)++        resp = await snapshot_client.get_blob_tags()++        # Assert+        self.assertIsNotNone(resp)+        self.assertEqual(len(resp), 3)+        for key, value in resp.items():+            self.assertEqual(blob_tags[key], value)++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_upload_block_blob_with_tags(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_tags = {"tag1": "firsttag", "tag2": "secondtag", "tag3": "thirdtag"}+        blob_client, resp = await self._create_block_blob(blob_tags=blob_tags)++        resp = await blob_client.get_blob_tags()++        # Assert+        self.assertIsNotNone(resp)+        self.assertEqual(len(resp), 3)++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_get_blob_properties_returns_tags_num(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_tags = {"tag1": "firsttag", "tag2": "secondtag", "tag3": "thirdtag"}+        blob_client, resp = await self._create_block_blob(blob_tags=blob_tags)++        resp = await blob_client.get_blob_properties()+        downloaded = await blob_client.download_blob()++        # Assert+        self.assertIsNotNone(resp)+        self.assertEqual(resp.tag_count, len(blob_tags))+        self.assertEqual(downloaded.properties.tag_count, len(blob_tags))++    @GlobalStorageAccountPreparer()+    @AsyncStorageTestCase.await_prepared_test+    async def test_create_append_blob_with_tags(self, resource_group, location, storage_account, storage_account_key):+        await self._setup(storage_account, storage_account_key)+        blob_tags = {"+-./:=_ ": "firsttag", "tag2": "+-./:=_", "+-./:=_1": "+-./:=_"}

Do we need to test case sensitivity?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def _from_generated(cls, generated):         blob.blob_tier_inferred = generated.properties.access_tier_inferred         blob.archive_status = generated.properties.archive_status         blob.blob_tier_change_time = generated.properties.access_tier_change_time+        blob.tag_count = generated.properties.tag_count+        blob.blob_tags = blob._parse_tags(generated.blob_tags)  # pylint: disable=protected-access         return blob +    @classmethod+    def _parse_tags(cls, generated_tags):

This should be either an @staticmethod or a separate function.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 class BlobProperties(DictMixin):         container-level scope is configured to allow overrides. Otherwise an error will be raised.     :ivar bool request_server_encrypted:         Whether this blob is encrypted.+    :ivar bool tag_count:+        Tags count on this blob.

"tag_count" means that it's an integer - but this is a bool?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def list_containers(                 page_iterator_class=ContainerPropertiesPaged             ) +    @distributed_trace+    def filter_blobs(self, where=None, **kwargs):

hmm let me think about this name.... We're calling it this across all languages?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):         except StorageErrorException as error:             process_storage_error(error) +    def _set_blob_tags_options(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        headers = kwargs.pop('headers', {})++        tags = serialize_blob_tags(blob_tags)++        options = {+            'tags': tags,+            'version_id': kwargs.pop('version_id', None),+            'timeout': kwargs.pop('timeout', None),+            'cls': return_response_headers,+            'headers': headers}+        options.update(kwargs)+        return options++    @distributed_trace+    def set_blob_tags(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        """The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.++        :param blob_tags:+            Blob tags+        :type blob_tags: dict(str, str)+        :keyword str version_id:+            The version id parameter is an opaque DateTime+            value that, when present, specifies the version of the blob to delete.+            It for service version 2019-10-10 and newer.+        :keyword bool validate_content:

I'm curious how validate_content is applicable to this function? Does changing the tags affect the MD5? We should definitely change the docstring description - it talks about data transfer, but no content is transferred by this function.

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):         except StorageErrorException as error:             process_storage_error(error) +    def _set_blob_tags_options(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        headers = kwargs.pop('headers', {})++        tags = serialize_blob_tags(blob_tags)++        options = {+            'tags': tags,+            'version_id': kwargs.pop('version_id', None),+            'timeout': kwargs.pop('timeout', None),+            'cls': return_response_headers,+            'headers': headers}+        options.update(kwargs)+        return options++    @distributed_trace+    def set_blob_tags(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        """The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.++        :param blob_tags:+            Blob tags+        :type blob_tags: dict(str, str)+        :keyword str version_id:+            The version id parameter is an opaque DateTime+            value that, when present, specifies the version of the blob to delete.+            It for service version 2019-10-10 and newer.

"Introduced in service version...." Also - could we please add the docstring tag for the SDK version this parameter was introduced in?

xiafu-msft

comment created time in 18 days

Pull request review commentAzure/azure-sdk-for-python

[Blob][STG73]Blob Tags

 def set_premium_page_blob_tier(self, premium_page_blob_tier, **kwargs):         except StorageErrorException as error:             process_storage_error(error) +    def _set_blob_tags_options(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        headers = kwargs.pop('headers', {})++        tags = serialize_blob_tags(blob_tags)++        options = {+            'tags': tags,+            'version_id': kwargs.pop('version_id', None),+            'timeout': kwargs.pop('timeout', None),+            'cls': return_response_headers,+            'headers': headers}+        options.update(kwargs)+        return options++    @distributed_trace+    def set_blob_tags(self, blob_tags=None, **kwargs):+        # type: (Optional[Dict[str, str]], **Any) -> Dict[str, Any]+        """The Set Tags operation enables users to set tags on a blob or specific blob version, but not snapshot.++        :param blob_tags:+            Blob tags+        :type blob_tags: dict(str, str)+        :keyword str version_id:+            The version id parameter is an opaque DateTime+            value that, when present, specifies the version of the blob to delete.

"blob to delete"?

xiafu-msft

comment created time in 18 days
