Metin Dumandag (mdumandag), @Hazelcast, İstanbul

push event mdumandag/hazelcast-client-protocol

mdumandag

commit sha 116ddaf3f64d4179b1b906c2c94ea34fa0c59eeb

use client message as iterator


push time in 2 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 8e7bc96278893a2f23f19134ed01edd7d3385e61

set initial seed cap to 65k


push time in 2 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

+/*
+ * Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import {AbstractLoadBalancer} from './AbstractLoadBalancer';
+import {randomInt} from '../Util';
+import {Member} from '../core/Member';
+
+// tslint:disable-next-line:no-bitwise
+const INITIAL_SEED_CAP = Math.ceil(Number.MAX_SAFE_INTEGER / (1 << 16));

I thought you meant setting the divisor to (1 << 16) :smile:. My bad.

mdumandag

comment created time in 2 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 302e85b44f525f8b2e0cad5020203a394a7210bb

use client message as the iterator and remove frameIterator method


push time in 2 days
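The commits above describe making the client message itself act as the frame iterator, replacing a separate frameIterator method. A minimal sketch of that idea, with all names hypothetical (the real ClientMessage internals are not shown here):

```typescript
// Hypothetical sketch: the message owns its frames and exposes
// hasNext()/next() directly instead of handing out an iterator object.
class Frame {
    constructor(public content: string) {}
}

class ClientMessage {
    private frames: Frame[] = [];
    private nextIndex = 0;

    addFrame(frame: Frame): void {
        this.frames.push(frame);
    }

    // Callers pull frames from the message itself.
    hasNext(): boolean {
        return this.nextIndex < this.frames.length;
    }

    next(): Frame {
        return this.frames[this.nextIndex++];
    }
}

const msg = new ClientMessage();
msg.addFrame(new Frame('header'));
msg.addFrame(new Frame('payload'));

const contents: string[] = [];
while (msg.hasNext()) {
    contents.push(msg.next().content);
}
```

One upside of this design is that codecs decoding a message no longer need to create and thread through a second iterator object; the read position lives in the message.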

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha df17b7b1c7e98de16a30dad021c318556d479bbb

revert back to using random starting index for round robin lb


push time in 3 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

+/*
+ * Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import {AbstractLoadBalancer} from './AbstractLoadBalancer';
+import {randomInt} from '../Util';
+import {Member} from '../core/Member';
+
+/**
+ * A {@link LoadBalancer} implementation that relies on using round robin
+ * to a next member to send a request to.
+ */
+export class RoundRobinLB extends AbstractLoadBalancer {
+    private index: number;
+
+    constructor() {
+        super();
+        this.index = randomInt(Date.now());
+    }
+
+    next(): Member {
+        const members = this.getMembers();
+        if (members == null || members.length === 0) {
+            return null;
+        }
+
+        const length = members.length;
+        const idx = (this.index++) % length;

I think this is better than the nanotime-based solution.

mdumandag

comment created time in 3 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

+import os
+import time
+from datetime import datetime
+
+import hazelcast
+from hazelcast import ClientConfig
+from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY, ClientProperties
+from hazelcast.exception import IllegalArgumentError, TopicOverflowError
+from hazelcast.proxy.reliable_topic import ReliableMessageListener
+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE
+from hazelcast.serialization.reliable_topic import ReliableTopicMessage
+from hazelcast.util import current_time_in_millis
+from tests.base import SingleMemberTestCase
+from tests.util import random_string, event_collector
+
+
+class _ReliableTopicTestException(BaseException):
+    pass
+
+
+class TestReliableMessageListener(ReliableMessageListener):
+    def __init__(self, collector):
+        self._collector = collector
+
+    def on_message(self, event):
+        self._collector(event)
+
+
+class TestReliableMessageListenerLossTolerant(ReliableMessageListener):
+    def __init__(self, collector):
+        self._collector = collector
+
+    def on_message(self, event):
+        self._collector(event)
+
+    def is_loss_tolerant(self):
+        return True
+
+
+class ReliableTopicTest(SingleMemberTestCase):
+    @classmethod
+    def configure_cluster(cls):
+        path = os.path.abspath(__file__)
+        dir_path = os.path.dirname(path)
+        with open(os.path.join(dir_path, "hazelcast_topic.xml")) as f:
+            return f.read()
+
+    def setUp(self):
+        config = ClientConfig()
+        config.set_property("hazelcast.serialization.input.returns.bytearray", True)
+
+        discard_config = ReliableTopicConfig("discard")
+        discard_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.DISCARD_NEWEST
+        config.add_reliable_topic_config(discard_config)
+
+        overwrite_config = ReliableTopicConfig("overwrite")
+        overwrite_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.DISCARD_OLDEST
+        config.add_reliable_topic_config(overwrite_config)
+
+        error_config = ReliableTopicConfig("error")
+        error_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.ERROR
+        config.add_reliable_topic_config(error_config)
+
+        stale_config = ReliableTopicConfig("stale")
+        stale_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.DISCARD_OLDEST
+        config.add_reliable_topic_config(stale_config)
+
+        self.client = hazelcast.HazelcastClient(self.configure_client(config))
+        self.reliable_topic = self.client.get_reliable_topic(random_string()).blocking()
+        self.registration_id = None
+
+    def tearDown(self):
+        if self.registration_id is not None:
+            self.reliable_topic.remove_listener(self.registration_id)

Could you shut down self.client after this line?

buraksezer

comment created time in 3 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

+        error_config = ReliableTopicConfig("error")
+        error_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.ERROR
+        config.add_reliable_topic_config(error_config)
+
+        stale_config = ReliableTopicConfig("stale")
+        stale_config.topic_overload_policy = TOPIC_OVERLOAD_POLICY.DISCARD_OLDEST
+        config.add_reliable_topic_config(stale_config)
+
+        self.client = hazelcast.HazelcastClient(self.configure_client(config))
+        self.reliable_topic = self.client.get_reliable_topic(random_string()).blocking()
+        self.registration_id = None
+
+    def tearDown(self):
+        if self.registration_id is not None:
+            self.reliable_topic.remove_listener(self.registration_id)
+
+    def test_add_listener(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+        self.reliable_topic.publish('item-value')
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 1)
+            event = collector.events[0]
+            self.assertEqual(event.message, 'item-value')
+            self.assertGreater(event.publish_time, 0)
+
+        self.assertTrueEventually(assert_event, 5)
+
+    def test_remove_listener(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+
+        reg_id = self.reliable_topic.add_listener(reliable_listener)
+        removed = self.reliable_topic.remove_listener(reg_id)
+        self.assertTrue(removed, True)
+
+    def test_none_listener(self):
+        with self.assertRaises(IllegalArgumentError):
+            self.reliable_topic.add_listener("invalid-listener")
+
+    def test_remove_listener_when_does_not_exist(self):
+        with self.assertRaises(IllegalArgumentError):
+            self.reliable_topic.remove_listener("id")
+
+    def test_remove_listener_when_already_removed(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+
+        reg_id = self.reliable_topic.add_listener(reliable_listener)
+        self.reliable_topic.remove_listener(reg_id)
+
+        with self.assertRaises(IllegalArgumentError):
+            self.reliable_topic.remove_listener(reg_id)
+
+    def test_error_on_message_not_terminal(self):
+        collector = event_collector()
+
+        class TestReliableMessageListenerNotTerminal(ReliableMessageListener):
+            def __init__(self, _collector):
+                self._collector = _collector
+
+            def on_message(self, event):
+                if event.message == "raise-exception":
+                    raise _ReliableTopicTestException("test-exception")
+
+                self._collector(event)
+
+            def is_terminal(self):
+                return False
+
+        reliable_listener = TestReliableMessageListenerNotTerminal(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        self.reliable_topic.publish('raise-exception')
+        self.reliable_topic.publish('work-normally')
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 1)
+            event = collector.events[0]
+            self.assertEqual(event.message, 'work-normally')
+            self.assertGreater(event.publish_time, 0)
+
+        self.assertTrueEventually(assert_event, 5)
+
+    def test_error_on_message_terminal(self):
+        collector = event_collector()
+
+        class TestReliableMessageListenerTerminal(ReliableMessageListener):
+            def __init__(self, _collector):
+                self._collector = _collector
+
+            def on_message(self, event):
+                if event.message == "raise-exception":
+                    raise _ReliableTopicTestException("test-exception")
+
+                self._collector(event)
+
+            def is_terminal(self):
+                return True
+
+        reliable_listener = TestReliableMessageListenerTerminal(collector)
+        # This listener will be removed by the ReliableTopic implementation
+        self.reliable_topic.add_listener(reliable_listener)
+
+        self.reliable_topic.publish('raise-exception')
+        self.reliable_topic.publish('work-normally')
+        time.sleep(0.5)
+        self.assertEqual(len(collector.events), 0)
+
+    def test_error_on_message_terminal_exception(self):
+        collector = event_collector()
+
+        class TestReliableMessageListenerTerminal(ReliableMessageListener):
+            def __init__(self, _collector):
+                self._collector = _collector
+
+            def on_message(self, event):
+                if event.message == "raise-exception":
+                    raise _ReliableTopicTestException("test-exception in on_message")
+
+                self._collector(event)
+
+            def is_terminal(self):
+                raise _ReliableTopicTestException("is_terminal failed")
+
+        reliable_listener = TestReliableMessageListenerTerminal(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        self.reliable_topic.publish('raise-exception')
+        self.reliable_topic.publish('work-normally')
+        time.sleep(0.5)
+        self.assertEqual(len(collector.events), 0)
+
+    def test_publish_many(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+        for i in range(10):
+            self.reliable_topic.publish('message ' + str(i))
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 10)
+
+        self.assertTrueEventually(assert_event, 10)
+
+    def test_message_field_set_correctly(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        before_publish_time = current_time_in_millis()
+        time.sleep(0.1)
+        self.reliable_topic.publish('item-value')
+        time.sleep(0.1)
+        after_publish_time = current_time_in_millis()
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 1)
+            event = collector.events[0]
+            self.assertEqual(event.message, 'item-value')
+            self.assertGreater(event.publish_time, before_publish_time)
+            self.assertLess(event.publish_time, after_publish_time)
+
+        self.assertTrueEventually(assert_event, 5)
+
+    def test_always_start_after_tail(self):
+        collector = event_collector()
+        reliable_listener = TestReliableMessageListener(collector)
+        self.reliable_topic.publish('1')
+        self.reliable_topic.publish('2')
+        self.reliable_topic.publish('3')
+
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        self.reliable_topic.publish('4')
+        self.reliable_topic.publish('5')
+        self.reliable_topic.publish('6')
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 3)
+            self.assertEqual(collector.events[0].message, "4")
+            self.assertEqual(collector.events[1].message, "5")
+            self.assertEqual(collector.events[2].message, "6")
+
+        self.assertTrueEventually(assert_event, 5)
+
+    def generate_items(self, n):
+        messages = []
+        for i in range(n):
+            msg = ReliableTopicMessage(
+                publish_time=current_time_in_millis(),
+                publisher_address="",
+                payload=self.client.serialization_service.to_data(i+1)
+            )
+            messages.append(msg)
+
+        return messages
+
+    def test_discard(self):
+        reliable_topic = self.client.get_reliable_topic("discard").blocking()
+        items = self.generate_items(10)
+        reliable_topic.ringbuffer.add_all(items, OVERFLOW_POLICY_FAIL)
+
+        reliable_topic.publish(11)
+        seq = reliable_topic.ringbuffer.tail_sequence().result()
+        item = reliable_topic.ringbuffer.read_one(seq).result()
+        num = self.client.serialization_service.to_object(item.payload)
+        self.assertEqual(num, 10)
+
+    def test_overwrite(self):
+        reliable_topic = self.client.get_reliable_topic("overwrite").blocking()
+        for i in range(10):
+            reliable_topic.publish(i+1)
+
+        reliable_topic.publish(11)
+        seq = reliable_topic.ringbuffer.tail_sequence().result()
+        item = reliable_topic.ringbuffer.read_one(seq).result()
+        num = self.client.serialization_service.to_object(item.payload)
+        self.assertEqual(num, 11)
+
+    def test_error(self):
+        reliable_topic = self.client.get_reliable_topic("error").blocking()
+        for i in range(10):
+            reliable_topic.publish(i+1)
+
+        with self.assertRaises(TopicOverflowError):
+            reliable_topic.publish(11)
+
+    def test_blocking(self):
+        reliable_topic = self.client.get_reliable_topic("blocking").blocking()
+        for i in range(10):
+            reliable_topic.publish(i+1)
+
+        before = datetime.utcnow()
+        reliable_topic.publish(11)
+        time_diff = datetime.utcnow() - before
+
+        seq = reliable_topic.ringbuffer.tail_sequence().result()
+        item = reliable_topic.ringbuffer.read_one(seq).result()
+        num = self.client.serialization_service.to_object(item.payload)
+        self.assertEqual(num, 11)
+        if time_diff.seconds <= 2:
+            self.fail("expected at least 2 seconds delay got %s" % time_diff.seconds)
+
+    def test_stale(self):
+        collector = event_collector()
+        self.reliable_topic = self.client.get_reliable_topic("stale").blocking()
+        reliable_listener = TestReliableMessageListenerLossTolerant(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        items = self.generate_items(20)
+        self.reliable_topic.ringbuffer.add_all(items, overflow_policy=OVERFLOW_POLICY_OVERWRITE)
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 10)
+            event = collector.events[9]
+            self.assertEqual(event.message, 20)
+
+        self.assertTrueEventually(assert_event, 5)
+
+    def test_distributed_object_destroyed(self):
+        config = ClientConfig()
+        config.network_config.connection_attempt_limit = 10
+        config.set_property(ClientProperties.INVOCATION_TIMEOUT_SECONDS.name, 10)
+        config.set_property("hazelcast.serialization.input.returns.bytearray", True)
+
+        client_two = hazelcast.HazelcastClient(self.configure_client(config))
+        # TODO: shutdown
+
+        collector = event_collector()
+        self.reliable_topic = client_two.get_reliable_topic("x")
+        reliable_listener = TestReliableMessageListenerLossTolerant(collector)
+        self.registration_id = self.reliable_topic.add_listener(reliable_listener)
+
+        self.rc.shutdownCluster(self.cluster.id)
+        self.cluster = self.create_cluster(self.rc, self.configure_cluster())
+        self.cluster.start_member()
+
+        self.reliable_topic.publish("aa")
+
+        def assert_event():
+            self.assertEqual(len(collector.events), 1)
+            event = collector.events[0]
+            self.assertEqual(event.message, "aa")
+
+        self.assertTrueEventually(assert_event, 5)

You can shut down client_two by putting this line in a try block and performing the shutdown in the finally block.

buraksezer

comment created time in 3 days
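The cleanup pattern suggested above is language-neutral; a minimal TypeScript sketch with a stub client (the real HazelcastClient API is not used here):

```typescript
// Stub standing in for a second client created inside a test.
class StubClient {
    running = true;
    shutdown(): void {
        this.running = false;
    }
}

const clientTwo = new StubClient();
let testFailed = false;
try {
    try {
        // ... test body that may throw ...
        throw new Error('simulated test failure');
    } finally {
        // Runs whether the body succeeded or threw, so the
        // client's resources are always released.
        clientTwo.shutdown();
    }
} catch {
    testFailed = true;
}
```

The failure still propagates to the test runner, but the shutdown has already happened by the time it does.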

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

+    def setUp(self):
+        config = ClientConfig()
+        config.set_property("hazelcast.serialization.input.returns.bytearray", True)

As I mentioned before, you can remove this line after performing the small fix I mentioned here https://github.com/hazelcast/hazelcast-python-client/pull/206#issuecomment-628522527 (item 1)

buraksezer

comment created time in 3 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy
+import time
+import threading
+from uuid import uuid4
+
+from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY
+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \
+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError
+from hazelcast.proxy.base import Proxy, TopicMessage
+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE
+from hazelcast.serialization.reliable_topic import ReliableTopicMessage
+from hazelcast.util import current_time_in_millis
 
-class ReliableTopic(Proxy):
-    def add_listener(self, on_message=None):
+_INITIAL_BACKOFF = 0.1
+_MAX_BACKOFF = 2
+
+
+class ReliableMessageListener(object):
+    def on_message(self, item):
+        """
+        Invoked when a message is received for the added reliable topic.
+
+        :param: message the message that is received for the added reliable topic
+        """
         raise NotImplementedError
 
+    def retrieve_initial_sequence(self):
+        """
+        Retrieves the initial sequence from which this ReliableMessageListener
+        should start.
+
+        Return -1 if there is no initial sequence and you want to start
+        from the next published message.
+
+        If you intend to create a durable subscriber so you continue from where
+        you stopped the previous time, load the previous sequence and add 1.
+        If you don't add one, then you will be receiving the same message twice.
+
+        :return: (int), the initial sequence
+        """
+        return -1
+
+    def store_sequence(self, sequence):
+        """
+        Informs the ReliableMessageListener that it should store the sequence.
+        This method is called before the message is processed. Can be used to
+        make a durable subscription.
+
+        :param: (int) ``sequence`` the sequence
+        """
+        pass
+
+    def is_loss_tolerant(self):
+        """
+        Checks if this ReliableMessageListener is able to deal with message loss.
+        Even though the reliable topic promises to be reliable, it can be that a
+        MessageListener is too slow. Eventually the message won't be available
+        anymore.
+
+        If the ReliableMessageListener is not loss tolerant and the topic detects
+        that there are missing messages, it will terminate the
+        ReliableMessageListener.
+
+        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.
+        """
+        return False
+
+    def is_terminal(self):
+        """
+        Checks if the ReliableMessageListener should be terminated based on an
+        exception thrown while calling on_message.
+
+        :return: (bool) ``True`` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.
+        """
+        raise False
+
+
+class _MessageListener(object):
+    def __init__(self, uuid, proxy, to_object, listener):
+        self._id = uuid
+        self._proxy = proxy
+        self._to_object = to_object
+        self._listener = listener
+        self._cancelled_lock = threading.Lock()
+        self._cancelled = False
+        self._sequence = 0
+
+    def start(self):
+        tail_seq = self._proxy.ringbuffer.tail_sequence()
+        initial_seq = self._listener.retrieve_initial_sequence()
+        if initial_seq == -1:
+            initial_seq = tail_seq.result() + 1
+        self._sequence = initial_seq
+        self._proxy.client.reactor.add_timer(0, self._next)
+
+    def _handle_illegal_argument_error(self):
+        def on_response(res):
+            head_seq = res.result()
+            self._proxy.logger.warning("MessageListener {} on topic {} requested a too large sequence. Jumping from "
+                                       "old sequence: {} to sequence: {}".format(self._id, self._proxy.name,
+                                                                                 self._sequence,
+                                                                                 head_seq))
+            self._sequence = head_seq
+            self._next()
+
+        future = self._proxy.ringbuffer.head_sequence()
+        future.add_done_callback(on_response)
+
+    def _handle_stale_sequence_error(self):
+        def on_response(res):
+            head_seq = res.result()
+            if self._listener.is_loss_tolerant:

This should call the function: if self._listener.is_loss_tolerant: should be if self._listener.is_loss_tolerant():

buraksezer

comment created time in 3 days
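The bug the reviewer points out is not Python-specific; in TypeScript terms, a method referenced without parentheses is a function object, and function objects are always truthy. A small sketch:

```typescript
class Listener {
    isLossTolerant(): boolean {
        return false;
    }
}

const listener = new Listener();

// Bug: this tests the method object, not its return value.
// (Routed through an intermediate variable because a direct
// `if (listener.isLossTolerant)` is flagged by the TypeScript compiler.)
const reference: unknown = listener.isLossTolerant;
const buggyResult = reference ? 'tolerant' : 'not tolerant';

// Fix: call the method and branch on what it returns.
const fixedResult = listener.isLossTolerant() ? 'tolerant' : 'not tolerant';
```

The buggy form always takes the "tolerant" branch even though the method returns false.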

push event mdumandag/hazelcast-client-protocol

mdumandag

commit sha 1992c44270ac0096f4bb0c41f3b4939354913f8a

address review comments


push time in 3 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export default class HazelcastClient {
     }
 
     /**
-     * Shuts down this client instance.
+     * Returns the {@link AddressProvider} of the client.
      */
-    shutdown(): void {
-        this.lifecycleService.emitLifecycleEvent(LifecycleEvent.shuttingDown);
+    getAddressProvider(): AddressProvider {
+        return this.addressProvider;
+    }
+
+    getLoadBalancer(): LoadBalancer {
+        return this.loadBalancer;
+    }
+
+    doShutdown(): void {
         if (this.mapRepairingTask !== undefined) {
             this.mapRepairingTask.shutdown();
         }
         this.nearCacheManager.destroyAllNearCaches();
-        this.statistics.stop();
-        this.partitionService.shutdown();
-        this.heartbeat.cancel();
+        this.proxyManager.destroy();
         this.connectionManager.shutdown();
-        this.listenerService.shutdown();
         this.invocationService.shutdown();
-        this.lifecycleService.emitLifecycleEvent(LifecycleEvent.shutdown);
+        this.listenerService.shutdown();
+        this.statistics.stop();
+    }
+
+    /**
+     * Shuts down this client instance.
+     */
+    shutdown(): void {
+        this.getLifecycleService().shutdown();

The logic is okay, but we are exposing the doShutdown method. In the Java client, doShutdown is in the implementation, not on the public API. So, I think we could mark them as private and access them using the as any trick. Would that be okay? This is the same problem as sendStateToCluster.

mdumandag

comment created time in 3 days
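The as any trick mentioned above can be sketched as follows; class and member names are illustrative, not the real client's:

```typescript
// A method marked private is hidden from the public API at compile
// time, but a cast to any bypasses TypeScript's visibility check,
// so trusted internal code (e.g. a lifecycle service) can still call it.
class Client {
    private wasShutDown = false;

    private doShutdown(): void {
        this.wasShutDown = true;
    }

    isShutDown(): boolean {
        return this.wasShutDown;
    }
}

const client = new Client();

// client.doShutdown();      // compile error: 'doShutdown' is private

// The escape hatch: cast away the type information first.
(client as any).doShutdown();
```

The check is purely compile-time; at runtime the method exists on the object either way, which is exactly why the cast works.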

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha f1959f0db5bde54e51c368b876d46307bd6e1e76

fix client message test


push time in 3 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export class LifecycleService extends EventEmitter {
     isRunning(): boolean {
         return this.active;
     }
+
+    public start(): void {
+        this.emitLifecycleEvent(LifecycleState.STARTING);
+        this.active = true;
+        this.emitLifecycleEvent(LifecycleState.STARTED);

I don't know the reasoning behind these; I just copied the behaviour from the Java client. But I agree with you, they also seem redundant to me.

mdumandag

comment created time in 3 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

+export class RoundRobinLB extends AbstractLoadBalancer {
+    private index: number;
+
+    constructor() {
+        super();
+        this.index = randomInt(Date.now());
+    }
+
+    next(): Member {
+        const members = this.getMembers();
+        if (members == null || members.length === 0) {
+            return null;
+        }
+
+        const length = members.length;
+        const idx = (this.index++) % length;

I set the cap as ceil(Number.MAX_SAFE_INTEGER / 1024). It would take 285 years to reach Number.MAX_SAFE_INTEGER even if we called next 1 million times per second, so I think it is okay to do so. By the way, if the seed is somehow greater than this, I assigned the seed to a random integer between 0 and this limit. Is that okay?

mdumandag

comment created time in 3 days
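The back-of-the-envelope figure in the comment checks out. A sketch of the arithmetic, with the constant name borrowed from the discussion and the divisor 1024 as stated there:

```typescript
// Cap the seed well below the largest exactly-representable integer,
// so the index can keep incrementing for a very long time before
// floating-point precision is lost.
const INITIAL_SEED_CAP = Math.ceil(Number.MAX_SAFE_INTEGER / 1024);

// Headroom between the cap and Number.MAX_SAFE_INTEGER:
const headroom = Number.MAX_SAFE_INTEGER - INITIAL_SEED_CAP;

// At one million next() calls per second:
const callsPerSecond = 1e6;
const secondsPerYear = 365.25 * 24 * 60 * 60;
const yearsToOverflow = headroom / callsPerSecond / secondsPerYear;
```

yearsToOverflow comes out a little over 285, matching the claim in the comment.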

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

  */
 
 /* tslint:disable:no-bitwise */
-/*
- Client Message is the carrier framed data as defined below.
- Any request parameter, response or event data will be carried in the payload.
- 0                   1                   2                   3
- 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
- |R|                      Frame Length                           |
- +-------------+---------------+---------------------------------+
- |  Version    |B|E|  Flags    |               Type              |
- +-------------+---------------+---------------------------------+
- |                       CorrelationId                           |
- |                                                               |
- +---------------------------------------------------------------+
- |                        PartitionId                            |
- +-----------------------------+---------------------------------+
- |        Data Offset          |                                 |
- +-----------------------------+                                 |
- |                      Message Payload Data                    ...
- |                                                              ...
- */
-
 import {Buffer} from 'safe-buffer';
 import * as Long from 'long';
 import {BitsUtil} from './BitsUtil';
-import {Data} from './serialization/Data';
-import {HeapData} from './serialization/HeapData';
+import {ClientConnection} from './network/ClientConnection';
 
-class ClientMessage {
+export const MESSAGE_TYPE_OFFSET = 0;

We had to expose PARTITION_ID_OFFSET and RESPONSE_BACKUP_ACKS_OFFSET since they are used in the initial frame size calculations in the codecs. I removed `export` from the other offsets.

mdumandag

comment created time in 3 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 5bf114c4c7df400d0a1f25928013fffca4c230e7

set partition id and message type using the client message methods

view details

push time in 3 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 The following are example configurations.  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.networkConfig.connectionAttemptLimit = 5;+clientConfig.clusterName = 'hzCluster';++clientConfig.networkConfig.cloudConfig.enabled = true;+clientConfig.networkConfig.cloudConfig.discoveryToken = 'EXAMPLE_TOKEN'; ``` -Its default value is `2`.+To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should set an SSL configuration as described in the previous section.++# 6. Client Connection Strategy -## 5.6. Setting Connection Attempt Period+Node.js client can be configured to connect to a cluster in an async manner during the client start and reconnecting+after a cluster disconnect. Both of these options are configured via `ClientConnectionStrategyConfig`. -Connection attempt period is the duration in milliseconds between the connection attempts defined by `ClientNetworkConfig.connectionAttemptLimit`.+You can configure the client’s starting mode as async or sync using the configuration element `asyncStart`.+When it is set to `true` (async), the behavior of `Client.newHazelcastClient()` call changes.+It resolves a client instance without waiting to establish a cluster connection. In this case, the client rejects+any network dependent operation with `ClientOfflineError` immediately until it connects to the cluster. If it is `false`,+the call is not resolved and the client is not created until a connection with the cluster is established.+Its default value is `false` (sync). -The following are example configurations.+You can also configure how the client reconnects to the cluster after a disconnection. 
This is configured using the+configuration element `reconnectMode`; it has three options:++* `OFF`:  Client rejects to reconnect to the cluster and triggers the shutdown process.+* `ON`: Client opens a connection to the cluster in a blocking manner by not resolving any of the waiting invocations.

I think in all clients, we try to keep the naming of things consistent with the Java client.

mdumandag

comment created time in 3 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export default class HazelcastClient {      */     getDistributedObjects(): Promise<DistributedObject[]> {         const clientMessage = ClientGetDistributedObjectsCodec.encodeRequest();-        const toObjectFunc = this.getSerializationService().toObject.bind(this);         const proxyManager = this.proxyManager;-        return this.invocationService.invokeOnRandomTarget(clientMessage).then(function (resp): any {-            const response = ClientGetDistributedObjectsCodec.decodeResponse(resp, toObjectFunc).response;-            return response.map((objectInfo: { [key: string]: any }) => {-                return proxyManager.getOrCreateProxy(objectInfo.value, objectInfo.key, false).value();+        return this.invocationService.invokeOnRandomTarget(clientMessage)+            .then((resp) => {+                const response = ClientGetDistributedObjectsCodec.decodeResponse(resp).response;+                return response.map((objectInfo) => {+                    // TODO value throws if the returned promise from the getOrCreate is not fullfiled yet.+                    //  This needs to be fixed. Also, we should create local instances instead of making remote calls.+                    return proxyManager.getOrCreateProxy(objectInfo.name, objectInfo.serviceName, false).value();

I added the removal logic to this method and related tests

mdumandag

comment created time in 3 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 hz.getReliableTopic('my-distributed-topic').then(function (t) {  Hazelcast Reliable Topic uses `MessageListener` to listen to the events that occur when a message is received. See the [Message Listener section](#7524-message-listener) for information on how to create a message listener object and register it. -## 7.4.9. Using Lock

Done. If it is okay, I will do the renaming after merging this PR

mdumandag

comment created time in 3 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

  * limitations under the License.  */ -import {ClientConnection} from './ClientConnection';+import {ClientConnection} from '../network/ClientConnection'; import * as Promise from 'bluebird';-import {ClientAddMembershipListenerCodec} from '../codec/ClientAddMembershipListenerCodec'; import {Member} from '../core/Member';-import {LoggingService} from '../logging/LoggingService'; import {ClientInfo} from '../ClientInfo'; import HazelcastClient from '../HazelcastClient';-import {IllegalStateError} from '../HazelcastError';-import * as assert from 'assert';+import {IllegalStateError, TargetDisconnectedError} from '../HazelcastError'; import {MemberSelector} from '../core/MemberSelector';-import {AddressHelper, DeferredPromise} from '../Util';-import {MemberAttributeEvent, MemberAttributeOperationType} from '../core/MemberAttributeEvent';+import {assertNotNull, DeferredPromise} from '../Util'; import {MembershipListener} from '../core/MembershipListener'; import {MembershipEvent} from '../core/MembershipEvent'; import {UuidUtil} from '../util/UuidUtil'; import {ILogger} from '../logging/ILogger';-import Address = require('../Address');-import ClientMessage = require('../ClientMessage');+import {UUID} from '../core/UUID';+import {ClientConnectionManager} from '../network/ClientConnectionManager';+import {InitialMembershipListener} from '../core/InitialMembershipListener';+import {InitialMembershipEvent} from '../core/InitialMembershipEvent';+import {MemberInfo} from '../core/MemberInfo';+import {Cluster} from '../core/Cluster';  export enum MemberEvent {     ADDED = 1,     REMOVED = 2, } -/**- * Manages the relationship of this client with the cluster.- */-export class ClusterService {+class MemberListSnapshot {+    version: number;+    members: Map<string, Member>; -    /**-     * The unique identifier of the owner server node. 
This node is responsible for resource cleanup-     */-    public ownerUuid: string = null;+    constructor(version: number, members: Map<string, Member>) {+        this.version = version;+        this.members = members;+    }+} -    /**-     * The unique identifier of this client instance. Assigned by owner node on authentication-     */-    public uuid: string = null;+const EMPTY_SNAPSHOT = new MemberListSnapshot(-1, new Map<string, Member>());+const INITIAL_MEMBERS_TIMEOUT_SECONDS = 120; -    private knownAddresses: Address[] = [];-    private members: Member[] = [];+/**+ * Manages the relationship of this client with the cluster.+ */+export class ClusterService implements Cluster {     private client: HazelcastClient;-    private ownerConnection: ClientConnection;-    private membershipListeners: Map<string, MembershipListener> = new Map();+    private memberListSnapshot: MemberListSnapshot = EMPTY_SNAPSHOT;+    private listeners: Map<string, MembershipListener> = new Map();     private logger: ILogger;+    private initialListFetched = DeferredPromise<void>();+    private connectionManager: ClientConnectionManager;+    private readonly labels: Set<string>;      constructor(client: HazelcastClient) {         this.client = client;-        this.logger = this.client.getLoggingService().getLogger();-        this.members = [];+        this.labels = new Set(client.getConfig().labels);+        this.logger = client.getLoggingService().getLogger();+        this.connectionManager = client.getConnectionManager();     }      /**-     * Starts cluster service.-     * @returns+     * Gets the member with the given UUID.+     *+     * @param uuid The UUID of the member.+     * @return The member that was found, or undefined if not found.      
*/-    start(): Promise<void> {-        this.initHeartbeatListener();-        this.initConnectionListener();-        return this.connectToCluster();+    public getMember(uuid: UUID): Member {+        assertNotNull(uuid);+        return this.memberListSnapshot.members.get(uuid.toString());     }      /**-     * Connects to cluster. It uses the addresses provided in the configuration.-     * @returns+     * Gets the collection of members.+     *+     * @return The collection of members.      */-    connectToCluster(): Promise<void> {-        return this.getPossibleMemberAddresses().then((res) => {-            this.knownAddresses = [];-            res.forEach((value) => {-                this.knownAddresses = this.knownAddresses.concat(AddressHelper.getSocketAddresses(value));-            });--            const attemptLimit = this.client.getConfig().networkConfig.connectionAttemptLimit;-            const attemptPeriod = this.client.getConfig().networkConfig.connectionAttemptPeriod;-            return this.tryConnectingToAddresses(0, attemptLimit, attemptPeriod);-        });-    }--    getPossibleMemberAddresses(): Promise<string[]> {-        const addresses: Set<string> = new Set();--        this.getMembers().forEach(function (member): void {-            addresses.add(member.address.toString());-        });--        let providerAddresses: Set<string> = new Set();-        const promises: Array<Promise<void>> = [];-        this.client.getConnectionManager().addressProviders.forEach((addressProvider) => {-            promises.push(addressProvider.loadAddresses().then((res) => {-                providerAddresses = new Set([...Array.from(providerAddresses), ...res]);-            }).catch((err) => {-                this.logger.warn('Error from AddressProvider: ' + addressProvider, err);-            }));-        });-        return Promise.all(promises).then(() => {-            return Array.from(new Set([...Array.from(addresses), ...Array.from(providerAddresses)]));-        
});+    public getMemberList(): Member[] {+        return Array.from(this.memberListSnapshot.members.values());     }      /**-     * Returns the list of members in the cluster.-     * @returns+     * Returns a collection of the members that satisfy the given {@link MemberSelector}.+     *+     * @param selector {@link MemberSelector} instance to filter members to return+     * @return members that satisfy the given {@link MemberSelector}.      */-    getMembers(selector?: MemberSelector): Member[] {-        if (selector === undefined) {-            return this.members;-        } else {-            const members: Member[] = [];-            this.members.forEach(function (member): void {-                if (selector.select(member)) {-                    members.push(member);-                }-            });-            return members;-        }-    }--    getMember(uuid: string): Member {-        for (const member of this.members) {-            if (member.uuid === uuid) {-                return member;+    public getMembers(selector: MemberSelector): Member[] {

I removed the public getMemberList and handled the undefined or null case here by returning the non-filtered member list
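A rough sketch (with hypothetical standalone names, not the actual `ClusterService` code) of the behavior described: when no selector is given, the unfiltered member list is returned; otherwise the members are filtered with the selector.

```javascript
// Illustrative sketch: return the full member list when no selector is
// provided, otherwise keep only the members the selector accepts.
function getMembers(members, selector) {
    if (selector == null) {
        // Covers both undefined and null: no filtering requested.
        return members.slice();
    }
    return members.filter((member) => selector.select(member));
}
```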

mdumandag

comment created time in 3 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export class PartitionService {         return Math.abs(partitionHash) % this.partitionCount;     } -    getPartitionCount(): number {+    /**+     * If partition table is not fetched yet, this method returns zero+     *+     * @return the partition count+     */+    public getPartitionCount(): number {         return this.partitionCount;     }++    /**+     * Resets the partition table to initial state.+     */+    public reset(): void {

This method is used as a part of the Blue/Green failover logic on the Java side. I probably forgot to remove it. So, I am removing it.

mdumandag

comment created time in 3 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha d8cc2855d1707e28ec115b74a3403594528172c2

address review comments

view details

push time in 3 days

PR opened hazelcast/hazelcast

Reviewers
Add validations to connection retry config and fix sleep time calcula… Team: Client Type: Defect

…tion

There were missing validation checks on the ConnectionRetryConfig. The following checks are added

initialBackoffMillis -> [0, inf)
maxBackoffMillis -> [0, inf)
multiplier -> [1.0, inf)
jitter -> [0.0, 1.0]
clusterConnectionTimeoutMillis -> No checks are added. I think negative values for this configuration element are valid as they could represent the configuration for not trying to connect possible addresses more than once. (though 0 could just work for this case)

Also, the actualSleepTime calculation was wrong. It was calculating the sleep time in the range [currentBackoffMillis - jitter * currentBackoffMillis, currentBackoffMillis].
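One plausible corrected calculation (an assumption for illustration, not necessarily the exact fix that was merged) randomizes the sleep time symmetrically around the current backoff:

```javascript
// Illustrative fix sketch: pick the sleep time uniformly from the range
// [backoff * (1 - jitter), backoff * (1 + jitter)] instead of the buggy
// [backoff - jitter * backoff, backoff] range described above.
function actualSleepTime(currentBackoffMillis, jitter) {
    return currentBackoffMillis + currentBackoffMillis * jitter * (2 * Math.random() - 1);
}
```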

+9 -2

0 comment

2 changed files

pr created time in 7 days

push eventmdumandag/hazelcast

mdumandag

commit sha 1c4832070a9b00bef6da03d609e3d531ced08339

Add validations to connection retry config and fix sleep time calculation There were missing validation checks on the ConnectionRetryConfig. The following checks are added initialBackoffMillis -> [0, inf) maxBackoffMillis -> [0, inf) multiplier -> [1.0, inf) jitter -> [0.0, 1.0] clusterConnectionTimeoutMillis -> No checks are added. I think negative values for this configuration element are valid as they could represent the configuration for not trying to connect possible addresses more than once. (though `0` could just work for this case) Also, `actualSleepTime` calculation was wrong. It was calculating sleep time in range [currentBackoffMilis - jitter * currentBackoffMillis, currentBackoffMillis].

view details

push time in 7 days

push eventmdumandag/hazelcast

alparslanavci

commit sha 7fbc394254bd2ed4df0376c0f442aeae36baa145

Added hazelcast-azure to hazelcast-all and updated the configs

view details

Vladimir Ozerov

commit sha 8f4a099f80ac8cc5bfedf04ab658b5c840a29e09

SQL: Initial integration with Apache Calcite (#16980)

view details

Peter Veentjer

commit sha a7e7defde2086b20a3af93e6d59ff4fadfd06ae4

Introduced thread affinity

view details

Peter Veentjer

commit sha 661ca7500ea14b36f9366720bb09a0c5c496e641

Fix NPE in MultiMapService.getStats

view details

Peter Veentjer

commit sha a1496221a028be974eeb0285f157c92a1cc9fb73

Merge pull request #16971 from pveentjer/v4.1/performance/thread-affinity Introduction of CPU thread affinity

view details

Peter Veentjer

commit sha 6c5ccb7aba156c004f6dbdf2075b188563456ed3

Merge pull request #16997 from pveentjer/v4.1/fix/NPE-MultiMapService-getStats Fix NPE in MultiMapService.getStats

view details

Asım Arslan

commit sha a3b8cf044e5e6fd9cade1b94f117728a27352b3c

ProtocolType enum is removed from protocol custom types and handled as builtin enum handling (as int)

view details

Alparslan Avcı

commit sha 7b0873a06cd9bdc750dd6950227ce922c2e69f78

Merge pull request #16982 from alparslanavci/add-azure-to-all Added hazelcast-azure to hazelcast-all

view details

Asım Arslan

commit sha 2fadd59b557a26fa3678e93329902fdd2f4c8224

fix binary compatibility file

view details

Bence Eros

commit sha cfcc57058b7a367894aceccf0ff15f3e713237bd

Merge pull request #17004 from asimarslan/feature/master/protocolType-enum ProtocolType enum is removed from protocol custom types.

view details

Vladimir Ozerov

commit sha 2ff9cf8f33513922c2d5ebabd8319e4e98a2b94d

IMap table metadata resolution (#16984) (#16995)

view details

Matko Medenjak

commit sha d833e5344db94252b6db7f98e982c783e5cb83bb

Minor javadoc fixes from various reviews (#17017)

view details

Matko Medenjak

commit sha 073eea5bf3f6df323c66e9b143d36863878b1472

Decrease preallocated partition container sizes (#16681) Decrease preallocated partition container sizes

view details

push time in 7 days

create barnchmdumandag/hazelcast

branch : fix-connection-retry-config

created branch time in 7 days

push eventmdumandag/hazelcast

Ubuntu

commit sha c66bffb12572ad3b1671503fd78e6b64bd68299d

3.12.1-SNAPSHOT

view details

Mehmet Dogan

commit sha 53b9dada89917b4ea83de6684f6359ab1660fc87

Log a warning for illegal reflective access operation in Java9+ OpenJ9 (cherry picked from commit 29577b09d71ceb9deb0010ee1b7f1ddefb6e12c2)

view details

Peter Veentjer

commit sha cedb1c7aa73b9404c218fa53cbca69b293be89ca

MetricsRegistry unregister fix. NioNetworking can be stopped and started. When this happens, the IOThreads get re-registered and will overwrite the old probes. This causes logging noise. This PR fixes this problem by deregistering the IOThreads when NioNetworking is shutdown. (cherry picked from commit 4d9c28e741d3d77ba39021ab06c30191bf52297b)

view details

Matko Medenjak

commit sha b9a06721fdb78a3239841ac5abd35e7e07a49f3b

Merge pull request #14835 from mmedenjak/openj9-module-log-maintenance Log a warning for illegal reflective access operation in Java9+ OpenJ9

view details

Matko Medenjak

commit sha 7a20c66c3ec78cba70c8942fcb673f3297d7b75a

Merge pull request #14837 from mmedenjak/v3.12/fix/NioNetworking-overwriting-probes-restart-backport MetricsRegistry unregister fix.

view details

Mehmet Dogan

commit sha 3969c8d8a1ed6607f4ad1059d56f8753696308f3

Prevent migration operations running before previous finalization completes Normally finalization is scheduled when either `PublishCompletedMigrationsOperation` or a migration operation is executed. But in a small window of time, a `MigrationOperation` can come and start just after `PublishCompletedMigrationsOperation` starts executing. In this case, if completed migrations include a previous migration which belongs to the same partition with `MigrationOperation` and local member was source of that migration and if `MigrationOperation` starts its execution before the `FinalizeMigrationOperation` is put into the partition operation threads queue, then `FinalizeMigrationOperation` can run after the `MigrationOperation` and remove data replicated by it. To fix that, `MigrationOperation` is retried if it cannot set `migrating` flag of a partition. `migrating` flag is set by migration operations and cleared by `FinalizeMigrationOperation`. So, if `migrating` flag is set while `MigrationOperation` is executed, that means former `FinalizeMigrationOperation` is not executed yet. (cherry picked from commit 50606167516d8962707073ded6b3162ad4899e0c)

view details

Mehmet Dogan

commit sha b718ba0b651b72c51654b6414d428f24cf19cbf9

Merge pull request #14834 from mdogan/migration-finalization-race-fix-z Prevent migration operations running before previous finalization completes

view details

Peter Veentjer

commit sha 905d34f1bb551f0d77271c4fcde7cc0752f0e133

Fixed wakeup bug in NioOutboundPipeline (cherry picked from commit 4d8d27437c114adf5b34e3bbacd3eccace4c1316)

view details

Matko Medenjak

commit sha 604fa7fb2654d41f6cb7f0e75419f8dce34bf43b

Merge pull request #14841 from mmedenjak/v3.12/fix/wakeup-bug-NioOutboundPipeline-maintenance Fixed wakeup bug in NioOutboundPipeline

view details

sancar

commit sha 76e4d247d0c63b287e8618148db9d9cfa3568546

Let Client handle restarted members with same uuid different address In hotrestart feature, members will preserve their uuid's, but can start with different addresses. Client was relying on only uuid, and when a member restarted, it was assuming nothing has changed. When it does not change its local membership view in this case, it is trying to continue with old address which is wrong. Also for these cases, we were not firing any member removed,added events. With this fix, for a restarted member with different address we will wire removed,added events. Note that, if member comes back with same address and uuid, from client point of view there is no way to detect the restart. So no events will be fired in that case. fixes https://github.com/hazelcast/hazelcast/issues/14839 (cherry picked from commit b639fd350efa0d22d3aa44129ee2a1f00bee7ea3)

view details

Serdar Ozmen

commit sha 9d3a27dced20cb7746b1954f3cd0734a1a574509

Syncing with 3.12 branch.

view details

sancar

commit sha 8aa219beab14efc4188d0caf7797f196fe7c9b0a

Merge pull request #14843 from sancar/fix/hotRestartMembership/maint Let Client handle restarted members with same uuid different address

view details

Matko Medenjak

commit sha aaeb6783e4c278d139b0d7ec9edafd84eb40bfc9

Revert write through (#14849)

view details

Serdar Ozmen

commit sha 3466244878796697f8dcf5fa3c4dad8a2f701bbd

Syncing with master.

view details

Josef Cacek

commit sha c535bc4fb44da5ac8c1174ef68f1d103f1245380

Update hazelcast-wm dependency to the latest one.

view details

Emin Demirci

commit sha 3ca4a142660b381f2bc5593a571cc78a10007aec

Add getter method for properties which will be used by the implementations

view details

Emin Demirci

commit sha 71adc51dbb8ec1f7df71d893c3cdffa0cfce9423

Merge pull request #14884 from eminn/yaml-maint [BACKPORT] Added getter method for yaml config builder properties

view details

Rafał Leszko

commit sha 0ec6351ec8d15659ff6c0ce2952ad978018d96dd

Update hazelcast-kubernetes dependency to 1.5 (#14899)

view details

Peter Veentjer

commit sha 6f8ce0a7f245d3bfa9e71329e8eca7a6fead04a1

Restored NioNetworking diagnostics (#14874) Someone by accident has removed this line and therefor we don't get NioNetworking level metrics. #14755 Backport of #14873

view details

sancar

commit sha c33d92bee44111e2e763b4986e736be9b50ab58b

Restart members in parallel and do cluster.shutdown() for hotrestart fixes https://github.com/hazelcast/hazelcast-enterprise/issues/2899 (cherry picked from commit 7df7ddfff6687242dfa9ea1accf91497bd1e8f95)

view details

push time in 7 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 The following are example configurations.  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.networkConfig.connectionAttemptPeriod = 5000;-```--Its default value is `3000` milliseconds. -## 5.7. Enabling Client TLS/SSL--You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL-for the client-cluster connection, you should set an SSL configuration. Please see [TLS/SSL section](#61-tlsssl).--As explained in the [TLS/SSL section](#61-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Node.js clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Node.js clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#613-mutual-authentication).+clientConfig.connectionStrategyConfig.asyncStart = false;+clientConfig.connectionStrategyConfig.reconnectMode = Config.ReconnectMode.ON;+``` -## 5.8. Enabling Hazelcast Cloud Discovery+## 6.1. Configuring Client Connection Retry -The purpose of Hazelcast Cloud Discovery is to provide the clients to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discoveryToken` field and set the `enabled` field to `true`.+When client is disconnected from the cluster, it searches for new connections to reconnect.+You can configure the frequency of the reconnection attempts and client shutdown behavior using+`ConnectionRetryConfig` (programmatical approach) or `connectionRetry` element (declarative approach). -The following are example configurations.+Below are the example configurations for each.  
**Declarative Configuration:**  ```json {- "group": {-        "name": "hazel",-        "password": "cast"-    },--    "network": {-        "hazelcastCloud": {-            "discoveryToken": "EXAMPLE_TOKEN",-            "enabled": true+    "connectionStrategy": {+        "asyncStart": false,+        "reconnectMode": "ON",+        "connectionRetry": {+            "initialBackoffMillis": 1000,+            "maxBackoffMillis": 60000,+            "multiplier": 2,+            "clusterConnectTimeoutMillis": 50000,+            "jitter": 0.2         }     } }- ```  **Programmatic Configuration:**  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.groupConfig.name = 'hazel';-clientConfig.groupConfig.password = 'cast';+var connectionRetryConfig = new Config.ConnectionRetryConfig();+connectionRetryConfig.initialBackoffMillis = 1000;+connectionRetryConfig.maxBackoffMillis = 60000;+connectionRetryConfig.multiplier = 2;+connectionRetryConfig.clusterConnectTimeoutMillis = 50000;+connectionRetryConfig.jitter = 0.2; -clientConfig.networkConfig.cloudConfig.enabled = true;-clientConfig.networkConfig.cloudConfig.discoveryToken = 'EXAMPLE_TOKEN';+clientConfig.connectionStrategyConfig.connectionRetryConfig = connectionRetryConfig; ``` -To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should set an SSL configuration as described in the previous section.+The following are configuration element descriptions: -# 6. Securing Client Connection+* `initialBackoffMillis`: Specifies how long to wait (backoff), in milliseconds, after the first failure before retrying. Its default value is 1000 ms.+* `maxBackoffMillis`: Specifies the upper limit for the backoff in milliseconds. Its default value is 30000 ms.+* `multiplier`: Factor to multiply the backoff after a failed retry. 
Its default value is 1.+* `clusterConnectTimeoutMillis`: Timeout value in milliseconds for the client to give up to connect to the current cluster Its default value is 20000.+* `jitter`: Specifies by how much to randomize backoffs. Its default value is 0.

By the way, I have a general question. Since there is a range limit for this property, what should we do for the programmatic configuration of these kinds of elements? Right now, we are not using setters for most of the config elements and the user can set them to any value they want, which may result in failures. Should we switch all of them to setters?
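A rough sketch of what the setter-based validation being discussed could look like. The class below is illustrative only, not the client's actual config API:

```javascript
// Hypothetical validating setter: reject out-of-range values at assignment
// time instead of letting them cause failures later during reconnection.
class ConnectionRetryConfig {
    constructor() {
        this._jitter = 0; // default value
    }

    get jitter() {
        return this._jitter;
    }

    set jitter(value) {
        // Enforce the documented range [0.0, 1.0] up front.
        if (typeof value !== 'number' || value < 0 || value > 1) {
            throw new RangeError('jitter must be in range [0.0, 1.0]');
        }
        this._jitter = value;
    }
}
```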

mdumandag

comment created time in 7 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 The following are example configurations.  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.networkConfig.connectionAttemptPeriod = 5000;-```--Its default value is `3000` milliseconds. -## 5.7. Enabling Client TLS/SSL--You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL-for the client-cluster connection, you should set an SSL configuration. Please see [TLS/SSL section](#61-tlsssl).--As explained in the [TLS/SSL section](#61-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Node.js clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Node.js clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#613-mutual-authentication).+clientConfig.connectionStrategyConfig.asyncStart = false;+clientConfig.connectionStrategyConfig.reconnectMode = Config.ReconnectMode.ON;+``` -## 5.8. Enabling Hazelcast Cloud Discovery+## 6.1. Configuring Client Connection Retry -The purpose of Hazelcast Cloud Discovery is to provide the clients to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discoveryToken` field and set the `enabled` field to `true`.+When client is disconnected from the cluster, it searches for new connections to reconnect.+You can configure the frequency of the reconnection attempts and client shutdown behavior using+`ConnectionRetryConfig` (programmatical approach) or `connectionRetry` element (declarative approach). -The following are example configurations.+Below are the example configurations for each.  
**Declarative Configuration:**  ```json {- "group": {-        "name": "hazel",-        "password": "cast"-    },--    "network": {-        "hazelcastCloud": {-            "discoveryToken": "EXAMPLE_TOKEN",-            "enabled": true+    "connectionStrategy": {+        "asyncStart": false,+        "reconnectMode": "ON",+        "connectionRetry": {+            "initialBackoffMillis": 1000,+            "maxBackoffMillis": 60000,+            "multiplier": 2,+            "clusterConnectTimeoutMillis": 50000,+            "jitter": 0.2         }     } }- ```  **Programmatic Configuration:**  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.groupConfig.name = 'hazel';-clientConfig.groupConfig.password = 'cast';+var connectionRetryConfig = new Config.ConnectionRetryConfig();+connectionRetryConfig.initialBackoffMillis = 1000;+connectionRetryConfig.maxBackoffMillis = 60000;+connectionRetryConfig.multiplier = 2;+connectionRetryConfig.clusterConnectTimeoutMillis = 50000;+connectionRetryConfig.jitter = 0.2; -clientConfig.networkConfig.cloudConfig.enabled = true;-clientConfig.networkConfig.cloudConfig.discoveryToken = 'EXAMPLE_TOKEN';+clientConfig.connectionStrategyConfig.connectionRetryConfig = connectionRetryConfig; ``` -To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should set an SSL configuration as described in the previous section.+The following are configuration element descriptions: -# 6. Securing Client Connection+* `initialBackoffMillis`: Specifies how long to wait (backoff), in milliseconds, after the first failure before retrying. Its default value is 1000 ms.+* `maxBackoffMillis`: Specifies the upper limit for the backoff in milliseconds. Its default value is 30000 ms.+* `multiplier`: Factor to multiply the backoff after a failed retry. 
Its default value is 1.+* `clusterConnectTimeoutMillis`: Timeout value in milliseconds for the client to give up to connect to the current cluster Its default value is 20000.+* `jitter`: Specifies by how much to randomize backoffs. Its default value is 0.

Yes, you are right, it should be in that range. Also, there is a bug in the implementation of backoff time with jitter on the Java side (and in this PR). I will send a fix for both of them.

mdumandag

comment created time in 7 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 Client.newHazelcastClient().then(function (client) { }); ``` -When the keys are the same, entries are stored on the same member. However, we sometimes want to have the related entries stored on the same member, such as a customer and his/her order entries. We would have a customers map with `customerId` as the key and an orders map with `orderId` as the key. Since `customerId` and `orderId` are different keys, a customer and his/her orders may fall into different members in your cluster. So how can we have them stored on the same member? We create an affinity between the customer and orders. If we make them part of the same partition then these entries will be co-located. We achieve this by making `OrderKey`s `PartitionAware`.+When the keys are the same, entries are stored on the same member. However, we sometimes want to have the related entries stored on the same member, such as a customer and his/her order entries. We would have a customers map with `customerId` as the key and an orders map with `orderId` as the key. Since `customerId` and `orderId` are different keys, a customer and his/her orders may fall into different members in your cluster. So how can we have them stored on the same member? We create an affinity between the customer and orders. If we make them part of the same partition then these entries will be co-located. We achieve this by making `OrderKey`s `PartitionAware`.++```javascript+function OrderKey(orderId, customerId) {+    this.orderId = orderId;+    this.customerId = customerId;+}++OrderKey.prototype.getPartitionKey = function () {+    return this.customerId;+};+```++Notice that `OrderKey` implements `PartitionAware` interface and that `getPartitionKey()` returns the `customerId`. 
This will make sure that the `Customer` entry and its `Order`s will be stored on the same member.++```javascript+var hazelcastClient;+var mapCustomers;+var mapOrders;++Client.newHazelcastClient().then(function (client) {+    hazelcastClient = client;+    return hazelcastClient.getMap('customers')+}).then(function (mp) {+    mapCustomers = mp;+    return hazelcastClient.getMap('orders');+}).then(function (mp) {+    mapOrders = mp;++    // create the customer entry with customer id = 1+    return mapCustomers.put(1, customer);+}).then(function () {+    // now create the orders for this customer+    return mapOrders.putAll([+        [new OrderKey(21, 1), order],+        [new OrderKey(22, 1), order],+        [new OrderKey(23, 1), order]+    ]);+});+```++For more details, see the [PartitionAware section](https://docs.hazelcast.org/docs/latest/manual/html-single/#partitionaware) in the Hazelcast IMDG Reference Manual.++### 7.8.2. Near Cache++Map entries in Hazelcast are partitioned across the cluster members. Hazelcast clients do not have local data at all. Suppose you read the key `k` a number of times from a Hazelcast client and `k` is owned by a member in your cluster. Then each `map.get(k)` will be a remote operation, which creates a lot of network trips. If you have a map that is mostly read, then you should consider creating a local Near Cache, so that reads are sped up and less network traffic is created.++These benefits do not come for free, please consider the following trade-offs:++- If invalidation is enabled and entries are updated frequently, then invalidations will be costly.++- Near Cache breaks the strong consistency guarantees; you might be reading stale data.++- Clients with a Near Cache will have to hold the extra cached data, which increases memory consumption.++Near Cache is highly recommended for maps that are mostly read.++#### 7.8.2.1. 
Configuring Near Cache++The following snippets show how a Near Cache is configured in the Node.js client, presenting all available values for each element:++**Declarative Configuration:**++```+{+    "nearCaches": [+        {+            "name": "mostlyReadMap",+            "invalidateOnChange": (false|true),+            "timeToLiveSeconds": (0..Number.MAX_SAFE_INTEGER),+            "maxIdleSeconds": (0..Number.MAX_SAFE_INTEGER),+            "inMemoryFormat": "(object|binary)",+            "evictionPolicy": "lru|lfu|random|none",+            "evictionMaxSize": (0..Number.MAX_SAFE_INTEGER),+            "evictionSamplingCount": (0..Number.MAX_SAFE_INTEGER),+            "evictionSamplingPoolSize": (0..Number.MAX_SAFE_INTEGER),+        }+    ]+}+```++**Programmatic Configuration:**++```javascript+var nearCacheConfig = new Config.NearCacheConfig();+nearCacheConfig.name = 'mostlyReadMap';+nearCacheConfig.invalidateOnChange = (false|true);+nearCacheConfig.timeToLiveSeconds = (0..Number.MAX_SAFE_INTEGER);+nearCacheConfig.maxIdleSeconds = (0..Number.MAX_SAFE_INTEGER);+nearCacheConfig.inMemoryFormat= (InMemoryFormat.OBJECT|InMemoryFormat.BINARY);+nearCacheConfig.evictionPolicy = (EvictionPolicy.LRU|EvictionPolicy.LFU|EvictionPolicy.RANDOM|EvictionPolicy.NONE);+nearCacheConfig.evictionMaxSize = (0..Number.MAX_SAFE_INTEGER);+nearCacheConfig.evictionSamplingCount = (0..Number.MAX_SAFE_INTEGER);+nearCacheConfig.evictionSamplingPoolSize = (0..Number.MAX_SAFE_INTEGER);++cfg.nearCacheConfigs['mostlyReadMap'] = nearCacheConfig;+```++Following are the descriptions of all configuration elements:++- `inMemoryFormat`: Specifies in which format data will be stored in your Near Cache. Note that a map’s in-memory format can be different from that of its Near Cache. 
Available values are as follows:+  - `BINARY`: Data will be stored in serialized binary format (default value).+  - `OBJECT`: Data will be stored in deserialized form.++- `invalidateOnChange`: Specifies whether the cached entries are evicted when the entries are updated or removed in members. Its default value is true.++- `timeToLiveSeconds`: Maximum number of seconds for each entry to stay in the Near Cache. Entries that are older than this period are automatically evicted from the Near Cache. Regardless of the eviction policy used, `timeToLiveSeconds` still applies. Any integer between 0 and `Number.MAX_SAFE_INTEGER`. 0 means infinite. Its default value is 0.++- `maxIdleSeconds`: Maximum number of seconds each entry can stay in the Near Cache as untouched (not read). Entries that are not read more than this period are removed from the Near Cache. Any integer between 0 and `Number.MAX_SAFE_INTEGER`. 0 means infinite. Its default value is 0.++- `evictionPolicy`: Eviction policy configuration. Available values are as follows:+  - `LRU`: Least Recently Used (default value).+  - `LFU`: Least Frequently Used.+  - `NONE`: No items are evicted and the `evictionMaxSize` property is ignored. You still can combine it with `timeToLiveSeconds` and `maxIdleSeconds` to evict items from the Near Cache.+  - `RANDOM`: A random item is evicted.++- `evictionMaxSize`: Maximum number of entries kept in the memory before eviction kicks in.+- `evictionSamplingCount`: Number of random entries that are evaluated to see if some of them are already expired. If there are expired entries, those are removed and there is no need for eviction.+- `evictionSamplingPoolSize`: Size of the pool for eviction candidates. The pool is kept sorted according to eviction policy. The entry with the highest score is evicted.++#### 7.8.2.2. Near Cache Example for Map++The following is an example configuration for a Near Cache defined in the `mostlyReadMap` map. 
According to this configuration, the entries are stored as `OBJECT`'s in this Near Cache and eviction starts when the count of entries reaches `5000`; entries are evicted based on the `LRU` (Least Recently Used) policy. In addition, when an entry is updated or removed on the member side, it is eventually evicted on the client side.++**Declarative Configuration:**++```+{+    "nearCaches": [+        {+            "name": "mostlyReadMap",+            "inMemoryFormat": "object",+            "invalidateOnChange": true,+            "evictionPolicy": "lru",+            "evictionMaxSize": 5000,+        }+    ]+}+```++**Programmatic Configuration:**++```javascript+var nearCacheConfig = new Config.NearCacheConfig();+nearCacheConfig.name = "mostlyReadMap";+nearCacheConfig.inMemoryFormat= InMemoryFormat.OBJECT;+nearCacheConfig.invalidateOnChange = true;+nearCacheConfig.evictionPolicy = EvictionPolicy.LRU;+nearCacheConfig.evictionMaxSize = 5000;++cfg.nearCacheConfigs['mostlyReadMap'] = nearCacheConfig;+```++#### 7.8.2.3. Near Cache Eviction++In the scope of Near Cache, eviction means evicting (clearing) the entries selected according to the given `evictionPolicy` when the specified `evictionMaxSize` has been reached.++The `evictionMaxSize` defines the entry count when the Near Cache is full and determines whether the eviction should be triggered.++Once the eviction is triggered the configured `evictionPolicy` determines which, if any, entries must be evicted.++#### 7.8.2.4. Near Cache Expiration++Expiration means the eviction of expired records. A record is expired:++- if it is not touched (accessed/read) for `maxIdleSeconds`++- `timeToLiveSeconds` passed since it is put to Near Cache++The actual expiration is performed when a record is accessed: it is checked if the record is expired or not. If it is expired, it is evicted and `undefined` is returned as the value to the caller.+++#### 7.8.2.5. 
Near Cache Invalidation++Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). See the [Near Cache Invalidation section](https://docs.hazelcast.org/docs/latest/manual/html-single/#near-cache-invalidation) in the Hazelcast IMDG Reference Manual.++#### 7.8.2.6. Near Cache Eventual Consistency++Near Caches are invalidated by invalidation events. Invalidation events can be lost due to the fire-and-forget fashion of eventing system. If an event is lost, reads from Near Cache can indefinitely be stale.++To solve this problem, Hazelcast provides eventually consistent behavior for Map Near Caches by detecting invalidation losses. After detection of an invalidation loss, stale data will be made unreachable and Near Cache’s `get` calls to that data will be directed to underlying Map to fetch the fresh data.++You can configure eventual consistency with the `ClientConfig.properties` below:++- `hazelcast.invalidation.max.tolerated.miss.count`: Default value is `10`. If missed invalidation count is bigger than this value, relevant cached data will be made unreachable.++- `hazelcast.invalidation.reconciliation.interval.seconds`: Default value is `60` seconds. This is a periodic task that scans cluster members periodically to compare generated invalidation events with the received ones from the client Near Cache.++### 7.8.3. Automated Pipelining++Hazelcast Node.js client performs automated pipelining of operations. It means that the library pushes all operations into an internal queue and tries to send them in batches. This reduces the count of executed `Socket.write()` calls and significantly improves throughtput for read operations.++You can configure automated operation pipelining with the `ClientConfig.properties` below:++- `hazelcast.client.autopipelining.enabled`: Default value is `true`. Turns automated pipelining feature on/off. 
If your application does only writes operations, like `IMap.set()`, you can try disabling automated pipelining to get a slightly better throughtput.++- `hazelcast.client.autopipelining.threshold.bytes`: Default value is `8192` bytes. This is the coalescing threshold for the internal queue used by automated pipelining. Once the total size of operation payloads taken from the queue reaches this value during batch preparation, these operations are written to the socket. Notice that automated pipelining will still send operations if their total size is smaller than the threshold and there are no more operations in the internal queue.++## 7.9. Monitoring and Logging++### 7.9.1. Enabling Client Statistics++You can monitor your clients using Hazelcast Management Center.++As a prerequisite, you need to enable the client statistics before starting your clients. This can be done by setting the `hazelcast.client.statistics.enabled` system property to `true` on the **member** as the following:++```xml+<hazelcast>+    ...+    <properties>+        <property name="hazelcast.client.statistics.enabled">true</property>+    </properties>+    ...+</hazelcast>+```++Also, you need to enable the client statistics in the Node.js client. There are two properties related to client statistics:++- `hazelcast.client.statistics.enabled`: If set to `true`, it enables collecting the client statistics and sending them to the cluster. When it is `true` you can monitor the clients that are connected to your Hazelcast cluster, using Hazelcast Management Center. Its default value is `false`.++- `hazelcast.client.statistics.period.seconds`: Period in seconds the client statistics are collected and sent to the cluster. 
Its default value is `3`.++You can enable client statistics and set a non-default period in seconds as follows:++**Declarative Configuration:**++```json+{+    "properties": {+        "hazelcast.client.statistics.enabled": true,+        "hazelcast.client.statistics.period.seconds": 4+    }+}+```++**Programmatic Configuration:**++```javascript+var config = new Config.ClientConfig();+config.properties['hazelcast.client.statistics.enabled'] = true;+config.properties['hazelcast.client.statistics.period.seconds'] = 4;+```++After enabling the client statistics, you can monitor your clients using Hazelcast Management Center. Please refer to the [Monitoring Clients section](https://docs.hazelcast.org/docs/management-center/latest/manual/html/index.html#monitoring-clients) in the Hazelcast Management Center Reference Manual for more information on the client statistics. -```javascript-function OrderKey(orderId, customerId) {-    this.orderId = orderId;-    this.customerId = customerId;-}+### 7.9.2. Logging Configuration -OrderKey.prototype.getPartitionKey = function () {-    return this.customerId;-};+ By default, Hazelcast Node.js client uses a default logger which logs to the `stdout` with the `INFO` log level. You can change the log level using the `'hazelcast.logging.level'` property of the `ClientConfig.properties`.++Below is an example of the logging configuration with the `OFF` log level which disables logging.++```javascript+cfg.properties['hazelcast.logging.level'] = LogLevel.OFF; ``` -Notice that `OrderKey` implements `PartitionAware` interface and that `getPartitionKey()` returns the `customerId`. This will make sure that the `Customer` entry and its `Order`s will be stored on the same member.+ You can also implement a custom logger depending on your needs. Your custom logger must have `log`, `error`, `warn`, `info`, `debug`, `trace` methods. 
After implementing it, you can use your custom logger using the `customLogger` property of `ClientConfig`++See the following for a custom logger example.  ```javascript-var hazelcastClient;-var mapCustomers;-var mapOrders;+var winstonAdapter = {+    logger: new (winston.Logger)({+        transports: [+            new (winston.transports.Console)()+        ]+    }), -Client.newHazelcastClient().then(function (client) {-    hazelcastClient = client;-    return hazelcastClient.getMap('customers')-}).then(function (mp) {-    mapCustomers = mp;-    return hazelcastClient.getMap('orders');-}).then(function (mp) {-    mapOrders = mp;+    levels: [+        'error',+        'warn',+        'info',+        'debug',+        'silly'+    ], -    // create the customer entry with customer id = 1-    return mapCustomers.put(1, customer);-}).then(function () {-    // now create the orders for this customer-    return mapOrders.putAll([-        [new OrderKey(21, 1), order],-        [new OrderKey(22, 1), order],-        [new OrderKey(23, 1), order]-    ]);-});-```+    log: function (level, objectName, message, furtherInfo) {+        this.logger.log(this.levels[level], objectName + ': ' + message, furtherInfo);+    }, -For more details, see the [PartitionAware section](https://docs.hazelcast.org/docs/latest/manual/html-single/#partitionaware) in the Hazelcast IMDG Reference Manual.+    error: function (objectName, message, furtherInfo) {+        this.log(LogLevel.ERROR, objectName, message, furtherInfo);+    }, -### 7.8.2. Near Cache+    warn: function (objectName, message, furtherInfo) {+        this.log(LogLevel.WARN, objectName, message, furtherInfo);+    }, -Map entries in Hazelcast are partitioned across the cluster members. Hazelcast clients do not have local data at all. Suppose you read the key `k` a number of times from a Hazelcast client and `k` is owned by a member in your cluster. Then each `map.get(k)` will be a remote operation, which creates a lot of network trips. 
If you have a map that is mostly read, then you should consider creating a local Near Cache, so that reads are sped up and less network traffic is created.+    info: function (objectName, message, furtherInfo) {+        this.log(LogLevel.INFO, objectName, message, furtherInfo);+    }, -These benefits do not come for free, please consider the following trade-offs:+    debug: function (objectName, message, furtherInfo) {+        this.log(LogLevel.DEBUG, objectName, message, furtherInfo);+    }, -- If invalidation is enabled and entries are updated frequently, then invalidations will be costly.+    trace: function (objectName, message, furtherInfo) {+        this.log(LogLevel.TRACE, objectName, message, furtherInfo);+    } -- Near Cache breaks the strong consistency guarantees; you might be reading stale data.+};+cfg.customLogger = winstonAdapter;+``` -- Clients with a Near Cache will have to hold the extra cached data, which increases memory consumption.+Note that it is not possible to configure custom logging via declarative configuration. -Near Cache is highly recommended for maps that are mostly read.+## 7.10. Defining Client Labels

I think the labels and the instance name are different concepts, so I added a new section for that (7.11).
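For context, defining client labels (section 7.10) might look like the sketch below. The `labels` property name is an assumption for illustration, not the confirmed 4.0 API; check the Defining Client Labels section for the actual configuration key.

```javascript
// Hypothetical sketch: attaching labels so this client instance can be
// distinguished (e.g., in Management Center). The `labels` property name
// is assumed, not taken from the reviewed diff.
var clientConfig = new Config.ClientConfig();
clientConfig.labels = ['backend', 'green-deployment'];
```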

mdumandag

comment created time in 7 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 The following are example configurations.  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.networkConfig.connectionAttemptPeriod = 5000;-```--Its default value is `3000` milliseconds. -## 5.7. Enabling Client TLS/SSL--You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL-for the client-cluster connection, you should set an SSL configuration. Please see [TLS/SSL section](#61-tlsssl).--As explained in the [TLS/SSL section](#61-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Node.js clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Node.js clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#613-mutual-authentication).+clientConfig.connectionStrategyConfig.asyncStart = false;+clientConfig.connectionStrategyConfig.reconnectMode = Config.ReconnectMode.ON;+``` -## 5.8. Enabling Hazelcast Cloud Discovery+## 6.1. Configuring Client Connection Retry -The purpose of Hazelcast Cloud Discovery is to provide the clients to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discoveryToken` field and set the `enabled` field to `true`.+When client is disconnected from the cluster, it searches for new connections to reconnect.+You can configure the frequency of the reconnection attempts and client shutdown behavior using+`ConnectionRetryConfig` (programmatical approach) or `connectionRetry` element (declarative approach). -The following are example configurations.+Below are the example configurations for each.  
**Declarative Configuration:**  ```json {- "group": {-        "name": "hazel",-        "password": "cast"-    },--    "network": {-        "hazelcastCloud": {-            "discoveryToken": "EXAMPLE_TOKEN",-            "enabled": true+    "connectionStrategy": {+        "asyncStart": false,+        "reconnectMode": "ON",+        "connectionRetry": {+            "initialBackoffMillis": 1000,+            "maxBackoffMillis": 60000,+            "multiplier": 2,+            "clusterConnectTimeoutMillis": 50000,+            "jitter": 0.2         }     } }- ```  **Programmatic Configuration:**  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.groupConfig.name = 'hazel';-clientConfig.groupConfig.password = 'cast';+var connectionRetryConfig = new Config.ConnectionRetryConfig();+connectionRetryConfig.initialBackoffMillis = 1000;+connectionRetryConfig.maxBackoffMillis = 60000;+connectionRetryConfig.multiplier = 2;+connectionRetryConfig.clusterConnectTimeoutMillis = 50000;+connectionRetryConfig.jitter = 0.2; -clientConfig.networkConfig.cloudConfig.enabled = true;-clientConfig.networkConfig.cloudConfig.discoveryToken = 'EXAMPLE_TOKEN';+clientConfig.connectionStrategyConfig.connectionRetryConfig = connectionRetryConfig; ``` -To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should set an SSL configuration as described in the previous section.+The following are configuration element descriptions: -# 6. Securing Client Connection+* `initialBackoffMillis`: Specifies how long to wait (backoff), in milliseconds, after the first failure before retrying. Its default value is 1000 ms.+* `maxBackoffMillis`: Specifies the upper limit for the backoff in milliseconds. Its default value is 30000 ms.+* `multiplier`: Factor to multiply the backoff after a failed retry. 
Its default value is 1.+* `clusterConnectTimeoutMillis`: Timeout value in milliseconds for the client to give up to connect to the current cluster Its default value is 20000.+* `jitter`: Specifies by how much to randomize backoffs. Its default value is 0.

Looking at the Java side and the implementation, I think there is no range limit for jitter. It is used as a multiplicand while determining the actual sleep time. It is probably not sane to set it to more than 1.0, but I didn't see a limiting factor for it.
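The point about jitter being a plain multiplicand can be seen in a small sketch of the backoff computation from the reviewed pseudo-code. Nothing in the formula restricts jitter to [0, 1]; a jitter above 1.0 can even make the computed sleep time negative, which is why values above 1.0 are not sane.

```javascript
// Sleep time for one retry: backoff plus a uniform random offset in
// [-jitter * backoff, jitter * backoff). With jitter > 1.0 the lower
// bound of this range goes below zero.
function nextSleepMillis(currentBackoffMillis, jitter) {
    const spread = jitter * currentBackoffMillis;
    const offset = -spread + Math.random() * 2 * spread;
    return currentBackoffMillis + offset;
}

// Backoff growth after a failed retry, capped at the configured maximum.
function nextBackoffMillis(currentBackoffMillis, multiplier, maxBackoffMillis) {
    return Math.min(currentBackoffMillis * multiplier, maxBackoffMillis);
}
```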

mdumandag

comment created time in 7 days

Pull request review commenthazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 The following are example configurations.  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.networkConfig.connectionAttemptPeriod = 5000;-```--Its default value is `3000` milliseconds. -## 5.7. Enabling Client TLS/SSL--You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL-for the client-cluster connection, you should set an SSL configuration. Please see [TLS/SSL section](#61-tlsssl).--As explained in the [TLS/SSL section](#61-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Node.js clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Node.js clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#613-mutual-authentication).+clientConfig.connectionStrategyConfig.asyncStart = false;+clientConfig.connectionStrategyConfig.reconnectMode = Config.ReconnectMode.ON;+``` -## 5.8. Enabling Hazelcast Cloud Discovery+## 6.1. Configuring Client Connection Retry -The purpose of Hazelcast Cloud Discovery is to provide the clients to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discoveryToken` field and set the `enabled` field to `true`.+When client is disconnected from the cluster, it searches for new connections to reconnect.+You can configure the frequency of the reconnection attempts and client shutdown behavior using+`ConnectionRetryConfig` (programmatical approach) or `connectionRetry` element (declarative approach). -The following are example configurations.+Below are the example configurations for each.  
**Declarative Configuration:**  ```json {- "group": {-        "name": "hazel",-        "password": "cast"-    },--    "network": {-        "hazelcastCloud": {-            "discoveryToken": "EXAMPLE_TOKEN",-            "enabled": true+    "connectionStrategy": {+        "asyncStart": false,+        "reconnectMode": "ON",+        "connectionRetry": {+            "initialBackoffMillis": 1000,+            "maxBackoffMillis": 60000,+            "multiplier": 2,+            "clusterConnectTimeoutMillis": 50000,+            "jitter": 0.2         }     } }- ```  **Programmatic Configuration:**  ```javascript var clientConfig = new Config.ClientConfig();-clientConfig.groupConfig.name = 'hazel';-clientConfig.groupConfig.password = 'cast';+var connectionRetryConfig = new Config.ConnectionRetryConfig();+connectionRetryConfig.initialBackoffMillis = 1000;+connectionRetryConfig.maxBackoffMillis = 60000;+connectionRetryConfig.multiplier = 2;+connectionRetryConfig.clusterConnectTimeoutMillis = 50000;+connectionRetryConfig.jitter = 0.2; -clientConfig.networkConfig.cloudConfig.enabled = true;-clientConfig.networkConfig.cloudConfig.discoveryToken = 'EXAMPLE_TOKEN';+clientConfig.connectionStrategyConfig.connectionRetryConfig = connectionRetryConfig; ``` -To be able to connect to the provided IP addresses, you should use secure TLS/SSL connection between the client and members. Therefore, you should set an SSL configuration as described in the previous section.+The following are configuration element descriptions: -# 6. Securing Client Connection+* `initialBackoffMillis`: Specifies how long to wait (backoff), in milliseconds, after the first failure before retrying. Its default value is 1000 ms.+* `maxBackoffMillis`: Specifies the upper limit for the backoff in milliseconds. Its default value is 30000 ms.+* `multiplier`: Factor to multiply the backoff after a failed retry. 
Its default value is 1.+* `clusterConnectTimeoutMillis`: Timeout value in milliseconds for the client to give up to connect to the current cluster Its default value is 20000.+* `jitter`: Specifies by how much to randomize backoffs. Its default value is 0. -This chapter describes the security features of Hazelcast Node.js client. These include using TLS/SSL for connections between members and between clients and members, mutual authentication and credentials. These security features require **Hazelcast IMDG Enterprise** edition.+A pseudo-code is as follows: -### 6.1. TLS/SSL+```text+begin_time = getCurrentTime()+current_backoff_millis = INITIAL_BACKOFF_MILLIS+while (tryConnect(connectionTimeout)) != SUCCESS) {+    if (getCurrentTime() - begin_time >= CLUSTER_CONNECT_TIMEOUT_MILLIS) {+        // Give up to connecting to the current cluster and switch to another if exists.+    }+    Sleep(current_backoff_millis + UniformRandom(-JITTER * current_backoff_millis, JITTER * current_backoff_millis))+    current_backoff = Min(current_backoff_millis * MULTIPLIER, MAX_BACKOFF_MILLIS)+}+``` -One of the offers of Hazelcast is the TLS/SSL protocol which you can use to establish an encrypted communication across your cluster with key stores and trust stores.+Note that, `tryConnect` above tries to connect to any member that the client knows, and for each connection we+have a connection timeout; see the [Setting Connection Timeout](#54-setting-connection-timeout) section. -* A Java `keyStore` is a file that includes a private key and a public certificate. The equivalent of a key store is the combination of `key` and `cert` files at the Node.js client side.-* A Java `trustStore` is a file that includes a list of certificates trusted by your application which is named as  "certificate authority". The equivalent of a trust store is a `ca` file at the Node.js client side.+# 7. 
Using Node.js Client with Hazelcast IMDG -You should set `keyStore` and `trustStore` before starting the members. See the next section on setting `keyStore` and `trustStore` on the server side.+This chapter provides information on how you can use Hazelcast IMDG's data structures in the Node.js client, after giving some basic information including an overview to the client API, operation modes of the client and how it handles the failures. -#### 6.1.1. TLS/SSL for Hazelcast Members+## 7.1. Node.js Client API Overview -Hazelcast allows you to encrypt socket level communication between Hazelcast members and between Hazelcast clients and members, for end to end encryption. To use it, see the [TLS/SSL for Hazelcast Members section](http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#tls-ssl-for-hazelcast-members).+Most of the functions in the API return `Promise`. Therefore, you need to be familiar with the concept of promises to use the Node.js client. If not, you can learn about them using various online resources, e.g., the [Promise JS](https://www.promisejs.org/) website. -#### 6.1.2. TLS/SSL for Hazelcast Node.js Clients+Promises provide a better way of working with callbacks. You can chain asynchronous functions by the `then()` function of promise. Also, you can use `async/await`, if you use Node.js 8 and higher versions. -TLS/SSL for the Hazelcast Node.js client can be configured using the `SSLConfig` class. In order to turn it on, `enabled` property of `SSLConfig` should be set to `true`:+If you are ready to go, let's start to use Hazelcast Node.js client. -```javascript-var fs = require('fs');+The first step is the configuration. You can configure the Node.js client declaratively or programmatically. We will use the programmatic approach throughout this chapter. 
See the [Programmatic Configuration section](#311-programmatic-configuration) for details.++The following is an example on how to create a `ClientConfig` object and configure it programmatically: +```javascript var clientConfig = new Config.ClientConfig();-var sslConfig = new Config.SSLConfig();-sslConfig.enabled = true;-clientConfig.networkConfig.sslConfig = sslConfig;+clientConfig.clusterName = 'dev';+clientConfig.networkConfig.addresses.push('10.90.0.1', '10.90.0.2:5702'); ``` -`SSLConfig` object takes various SSL options defined in the [Node.js TLS Documentation](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback). You can set your custom options-object to `sslConfig.sslOptions`.--#### 6.1.3. Mutual Authentication+The second step is initializing the `HazelcastClient` to be connected to the cluster: -As explained above, Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast clients have trust stores used to define which members they can trust.+```javascript+Client.newHazelcastClient(clientConfig).then(function (client) {+    // some operation+});+``` -Using mutual authentication, the clients also have their key stores and members have their trust stores so that the members can know which clients they can trust.+**This client object is your gateway to access all the Hazelcast distributed objects.** -To enable mutual authentication, firstly, you need to set the following property on the server side in the `hazelcast.xml` file:+Let's create a map and populate it with some data, as shown below. 
-```xml-<network>-    <ssl enabled="true">-        <properties>-            <property name="javax.net.ssl.mutualAuthentication">REQUIRED</property>-        </properties>-    </ssl>-</network>+```javascript+var map;+// Get the Distributed Map from Cluster.+client.getMap('my-distributed-map').then(function (mp) {+    map = mp;+    // Standard Put and Get.+    return map.put('key', 'value');+}).then(function () {+    return map.get('key');+}).then(function (val) {+    // Concurrent Map methods, optimistic updating+    return map.putIfAbsent('somekey', 'somevalue');+}).then(function () {+    return map.replace('key', 'value', 'newvalue');+}); ``` -You can see the details of setting mutual authentication on the server side in the [Mutual Authentication section](https://docs.hazelcast.org/docs/latest/manual/html-single/index.html#mutual-authentication) of the Hazelcast IMDG Reference Manual.+As the final step, if you are done with your client, you can shut it down as shown below. This will release all the used resources and close connections to the cluster. -At the Node.js client side, you need to supply an SSL `options` object to pass to-[`tls.connect`](https://nodejs.org/api/tls.html#tls_tls_connect_options_callback) of Node.js.+```javascript+...+.then(function () {+    client.shutdown();+});+``` -There are two ways to provide this object to the client:+## 7.2. Node.js Client Operation Modes -1. Using the built-in `BasicSSLOptionsFactory` bundled with the client.-2. Writing an `SSLOptionsFactory`.+The client has two operation modes because of the distributed nature of the data and cluster: smart and unisocket. -Below subsections describe each way.+### 7.2.1. Smart Client -**Using the Built-in `BasicSSLOptionsFactory`**+In the smart mode, the clients connect to each cluster member. 
Since each data partition uses the well known and consistent hashing algorithm, each client can send an operation to the relevant cluster member, which increases the overall throughput and efficiency. Smart mode is the default mode. -Hazelcast Node.js client includes a utility factory class that creates the necessary `options` object out of the supplied-properties. All you need to do is to specify your factory as `BasicSSLOptionsFactory` and provide the following options:+### 7.2.2. Unisocket Client

Added a link to the section that describes how to configure this setting in the parent section (7.2). Is that okay for you?

mdumandag

comment created time in 7 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 clientConfig.networkConfig.connectionTimeout = 6000;

 Its default value is `5000` milliseconds.

-## 5.5. Setting Connection Attempt Limit
+## 5.5. Enabling Client TLS/SSL
+
+You can use TLS/SSL to secure the connection between the clients and members. If you want to enable TLS/SSL
+for the client-cluster connection, you should set an SSL configuration. Please see [TLS/SSL section](#81-tlsssl).
+
+As explained in the [TLS/SSL section](#81-tlsssl), Hazelcast members have key stores used to identify themselves (to other members) and Hazelcast Node.js clients have certificate authorities used to define which members they can trust. Hazelcast has the mutual authentication feature which allows the Node.js clients also to have their private keys and public certificates, and members to have their certificate authorities so that the members can know which clients they can trust. See the [Mutual Authentication section](#813-mutual-authentication).

-While the client is trying to connect initially to one of the members in the `ClientNetworkConfig.addresses`, that member might not be available at that moment. Instead of giving up, throwing an error and stopping the client, the client will retry as many as `ClientNetworkConfig.connectionAttemptLimit` times. This is also the case when the previously established connection between the client and that member goes down.
+## 5.6. Enabling Hazelcast Cloud Discovery
+
+The purpose of Hazelcast Cloud Discovery is to provide the clients to use IP addresses provided by `hazelcast orchestrator`. To enable Hazelcast Cloud Discovery, specify a token for the `discoveryToken` field and set the `enabled` field to `true`.
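For reference, Cloud Discovery can be enabled through the client's declarative JSON configuration; a minimal sketch, assuming the `hazelcastCloud` element and field names match the configuration schema that a later PR in this feed adds to the validation schema (treat the exact paths as assumptions):

```json
{
    "network": {
        "hazelcastCloud": {
            "discoveryToken": "EXAMPLE_TOKEN",
            "enabled": true
        }
    }
}
```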

It was written like this before, so I don't know the reason, but as you said, I agree on not having it in ticks.

mdumandag

comment created time in 7 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 hz.getReliableTopic('my-distributed-topic').then(function (t) {

 Hazelcast Reliable Topic uses `MessageListener` to listen to the events that occur when a message is received. See the [Message Listener section](#7524-message-listener) for information on how to create a message listener object and register it.

-## 7.4.9. Using Lock

Got it, added a new section about them
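As a rough illustration of the listener contract discussed above: a listener registered on a topic is invoked once per published message. The `ToyTopic` class below is hypothetical, an in-memory stand-in, not the client's API — the real client delivers reliable topic messages via a Ringbuffer.

```javascript
// Toy in-memory stand-in for a topic, illustrating the MessageListener contract.
class ToyTopic {
    constructor() { this.listeners = []; }
    addMessageListener(listener) { this.listeners.push(listener); }
    publish(messageObject) {
        // Wrap the payload in a message envelope, as topic listeners receive one.
        const message = { messageObject: messageObject, publishingTime: Date.now() };
        this.listeners.forEach((l) => l(message));
    }
}

const topic = new ToyTopic();
const received = [];
topic.addMessageListener((msg) => received.push(msg.messageObject));
topic.publish('hello');
topic.publish('world');
console.log(received); // → [ 'hello', 'world' ]
```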

mdumandag

comment created time in 7 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 2142c6258343f23b98579b98f7955d4298d9d7b5

address documentation comments

view details

push time in 7 days

delete branch mdumandag/hazelcast-nodejs-client

delete branch : missing-defaults

delete time in 7 days

push event hazelcast/hazelcast-nodejs-client

Metin Dumandag

commit sha 5947d05efbab55f39c4fe4f7617720e73889738d

Add missing properties and schemas (#536) There were a few missing properties in the hazelcast-client-default.json along with some non-valid values. Also, hazelcastCloud element was missing from the configuration validation schema.

view details

push time in 7 days

PR merged hazelcast/hazelcast-nodejs-client

Add missing properties and schemas Type: Defect

There were a few missing properties in the hazelcast-client-default.json along with some non-valid values. Also, hazelcastCloud element was missing from the configuration validation schema.

+42 -6

1 comment

2 changed files

mdumandag

pr closed time in 7 days

pull request comment hazelcast/hazelcast-nodejs-client

Add missing properties and schemas

verify

mdumandag

comment created time in 10 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha b5f5d7268003299b34ce55c0272c9fbe22f5d570

update default config and config schema

view details

push time in 10 days

PR opened hazelcast/hazelcast-nodejs-client

Reviewers
Add missing properties and schemas Type: Defect

There were a few missing properties in the hazelcast-client-default.json along with some non-valid values. Also, hazelcastCloud element was missing from the configuration validation schema.

+42 -6

0 comment

2 changed files

pr created time in 10 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 2dcc7e121490d91b2b1916bf78131755c63e1d82

Add missing properties and schemas There were a few missing properties in the hazelcast-client-default.json along with some non-valid values. Also, hazelcastCloud element was missing from the configuration validation schema.

view details

push time in 10 days

create branch mdumandag/hazelcast-nodejs-client

branch : missing-defaults

created branch time in 10 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 29a610b4c8350aae8c922b68baa93782959f1821

add documentation and test for client labels

view details

push time in 10 days

push event mdumandag/hazelcast-nodejs-client

Metin Dumandag

commit sha 19bae10d97953f2db555b917091af755a42ee897

update test

view details

push time in 13 days

push event mdumandag/hazelcast-nodejs-client

Metin Dumandag

commit sha b523a28bcb5498cdb984dd645a79e5e5084c9677

update test

view details

push time in 13 days

push event mdumandag/hazelcast-nodejs-client

Metin Dumandag

commit sha 5cd3eb9e696e72aa6405bbf12d427afe708e0ba7

update test

view details

push time in 13 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha ccb26c449ddb8b81c834251a07a18ec03dedb996

add initial membership listener test

view details

push time in 13 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha c2be5ffe63e77816fba3f84e046a6c56f23d5dea

more documentation updates

view details

push time in 13 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 9159c074c7c270c7784ce7b9fe3c83da0fee66e2

update documentation

view details

push time in 13 days

pull request comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

verify

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export default class HazelcastClient {
      */
     getDistributedObjects(): Promise<DistributedObject[]> {
         const clientMessage = ClientGetDistributedObjectsCodec.encodeRequest();
-        const toObjectFunc = this.getSerializationService().toObject.bind(this);
         const proxyManager = this.proxyManager;
-        return this.invocationService.invokeOnRandomTarget(clientMessage).then(function (resp): any {
-            const response = ClientGetDistributedObjectsCodec.decodeResponse(resp, toObjectFunc).response;
-            return response.map((objectInfo: { [key: string]: any }) => {
-                return proxyManager.getOrCreateProxy(objectInfo.value, objectInfo.key, false).value();
+        return this.invocationService.invokeOnRandomTarget(clientMessage)
+            .then((resp) => {
+                const response = ClientGetDistributedObjectsCodec.decodeResponse(resp).response;
+                return response.map((objectInfo) => {
+                    // TODO value throws if the returned promise from the getOrCreate is not fullfiled yet.
+                    //  This needs to be fixed. Also, we should create local instances instead of making remote calls.
+                    return proxyManager.getOrCreateProxy(objectInfo.name, objectInfo.serviceName, false).value();

I agree on the return type. Also, the Java client destroys the local instances of the destroyed proxies as a side effect. Added a TODO about this missing effect on this client. What do you think about it?

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 export class ClusterDataFactoryHelper {
     static readonly FACTORY_ID = 0;
     static readonly ADDRESS_ID = 1;
-    static readonly VECTOR_CLOCK = 43;
+    static readonly VECTOR_CLOCK = 40;

Yes it is not used, I removed it.

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

  */  /* tslint:disable:no-bitwise */-/*- Client Message is the carrier framed data as defined below.- Any request parameter, response or event data will be carried in the payload.- 0                   1                   2                   3- 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1- +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- |R|                      Frame Length                           |- +-------------+---------------+---------------------------------+- |  Version    |B|E|  Flags    |               Type              |- +-------------+---------------+---------------------------------+- |                       CorrelationId                           |- |                                                               |- +---------------------------------------------------------------+- |                        PartitionId                            |- +-----------------------------+---------------------------------+- |        Data Offset          |                                 |- +-----------------------------+                                 |- |                      Message Payload Data                    ...- |                                                              ...- */- import {Buffer} from 'safe-buffer'; import * as Long from 'long'; import {BitsUtil} from './BitsUtil';-import {Data} from './serialization/Data';-import {HeapData} from './serialization/HeapData';--class ClientMessage {--    private buffer: Buffer;-    private cursor: number = BitsUtil.HEADER_SIZE;-    private isRetryable: boolean;--    constructor(buffer: Buffer) {-        this.buffer = buffer;-    }--    public static newClientMessage(payloadSize: number): ClientMessage {-        const totalSize = BitsUtil.HEADER_SIZE + payloadSize;-        const buffer = Buffer.allocUnsafe(totalSize);-        const message = new ClientMessage(buffer);-        message.setDataOffset(BitsUtil.HEADER_SIZE);-        message.setVersion(BitsUtil.VERSION);- 
       message.setFrameLength(totalSize);-        message.setFlags(0xc0);-        message.setPartitionId(-1);-        return message;-    }--    clone(): ClientMessage {-        const message = new ClientMessage(Buffer.from(this.buffer));-        message.isRetryable = this.isRetryable;-        return message;-    }--    getBuffer(): Buffer {-        return this.buffer;-    }--    getCorrelationId(): number {-        const offset = BitsUtil.CORRELATION_ID_FIELD_OFFSET;-        return this.readLongInternal(offset).toNumber();+import {ClientConnection} from './network/ClientConnection';++export const MESSAGE_TYPE_OFFSET = 0;+export const CORRELATION_ID_OFFSET = MESSAGE_TYPE_OFFSET + BitsUtil.INT_SIZE_IN_BYTES;+export const RESPONSE_BACKUP_ACKS_OFFSET = CORRELATION_ID_OFFSET + BitsUtil.LONG_SIZE_IN_BYTES;+export const PARTITION_ID_OFFSET = CORRELATION_ID_OFFSET + BitsUtil.LONG_SIZE_IN_BYTES;+export const FRAGMENTATION_ID_OFFSET = 0;++export const DEFAULT_FLAGS = 0;+export const BEGIN_FRAGMENT_FLAG = 1 << 15;+export const END_FRAGMENT_FLAG = 1 << 14;+export const UNFRAGMENTED_MESSAGE = BEGIN_FRAGMENT_FLAG | END_FRAGMENT_FLAG;+export const IS_FINAL_FLAG = 1 << 13;+export const BEGIN_DATA_STRUCTURE_FLAG = 1 << 12;+export const END_DATA_STRUCTURE_FLAG = 1 << 11;+export const IS_NULL_FLAG = 1 << 10;+export const IS_EVENT_FLAG = 1 << 9;+export const BACKUP_AWARE_FLAG = 1 << 8;+export const BACKUP_EVENT_FLAG = 1 << 7;++export const SIZE_OF_FRAME_LENGTH_AND_FLAGS = BitsUtil.INT_SIZE_IN_BYTES + BitsUtil.SHORT_SIZE_IN_BYTES;++export class Frame {+    content: Buffer;+    flags: number;+    next: Frame;++    constructor(content: Buffer, flags?: number) {+        this.content = content;+        if (flags) {+            this.flags = flags;+        } else {+            this.flags = DEFAULT_FLAGS;+        }     } -    setCorrelationId(value: number): void {-        this.writeLongInternal(value, BitsUtil.CORRELATION_ID_FIELD_OFFSET);+    getLength(): number {+        return 
SIZE_OF_FRAME_LENGTH_AND_FLAGS + this.content.length;     } -    getPartitionId(): number {-        return this.buffer.readInt32LE(BitsUtil.PARTITION_ID_FIELD_OFFSET);+    copy(): Frame {+        const frame = new Frame(this.content, this.flags);+        frame.next = this.next;+        return frame;     } -    setPartitionId(value: number): void {-        this.buffer.writeInt32LE(value, BitsUtil.PARTITION_ID_FIELD_OFFSET);+    deepCopy(): Frame {+        const content = Buffer.from(this.content);+        const frame = new Frame(content, this.flags);+        frame.next = this.next;+        return frame;     } -    setVersion(value: number): void {-        this.buffer.writeUInt8(value, BitsUtil.VERSION_FIELD_OFFSET);+    isBeginFrame(): boolean {+        return ClientMessage.isFlagSet(this.flags, BEGIN_DATA_STRUCTURE_FLAG);     } -    getMessageType(): number {-        return this.buffer.readUInt16LE(BitsUtil.TYPE_FIELD_OFFSET);+    isEndFrame(): boolean {+        return ClientMessage.isFlagSet(this.flags, END_DATA_STRUCTURE_FLAG);     } -    setMessageType(value: number): void {-        this.buffer.writeUInt16LE(value, BitsUtil.TYPE_FIELD_OFFSET);+    isNullFrame(): boolean {+        return ClientMessage.isFlagSet(this.flags, IS_NULL_FLAG);     }+} -    getFlags(): number {-        return this.buffer.readUInt8(BitsUtil.FLAGS_FIELD_OFFSET);-    }+export const NULL_FRAME = new Frame(Buffer.allocUnsafe(0), IS_NULL_FLAG);+export const BEGIN_FRAME = new Frame(Buffer.allocUnsafe(0), BEGIN_DATA_STRUCTURE_FLAG);+export const END_FRAME = new  Frame(Buffer.allocUnsafe(0), END_DATA_STRUCTURE_FLAG); -    setFlags(value: number): void {-        this.buffer.writeUInt8(value, BitsUtil.FLAGS_FIELD_OFFSET);+export class ForwardFrameIterator {+    private nextFrame: Frame;+    constructor(startFrame: Frame) {+        this.nextFrame = startFrame;     } -    hasFlags(flags: number): number {-        return this.getFlags() & flags;+    next(): Frame {+        const result = 
this.nextFrame;+        if (this.nextFrame != null) {+            this.nextFrame = this.nextFrame.next;+        }+        return result;     } -    getFrameLength(): number {-        return this.buffer.readInt32LE(BitsUtil.FRAME_LENGTH_FIELD_OFFSET);+    hasNext(): boolean {+        return this.nextFrame !== null;     } -    setFrameLength(value: number): void {-        this.buffer.writeInt32LE(value, BitsUtil.FRAME_LENGTH_FIELD_OFFSET);+    peekNext(): Frame {+        return this.nextFrame;     }+} -    getDataOffset(): number {-        return this.buffer.readInt16LE(BitsUtil.DATA_OFFSET_FIELD_OFFSET);-    }+export class ClientMessage {+    startFrame: Frame;+    endFrame: Frame;+    private retryable: boolean;+    private connection: ClientConnection; -    setDataOffset(value: number): void {-        this.buffer.writeInt16LE(value, BitsUtil.DATA_OFFSET_FIELD_OFFSET);+    private constructor(startFrame?: Frame, endFrame?: Frame) {+        this.startFrame = startFrame;+        this.endFrame = endFrame || startFrame;     } -    setRetryable(value: boolean): void {-        this.isRetryable = value;+    static createForEncode(): ClientMessage {+        return new ClientMessage();     } -    appendByte(value: number): void {-        this.buffer.writeUInt8(value, this.cursor);-        this.cursor += BitsUtil.BYTE_SIZE_IN_BYTES;+    static createForDecode(startFrame: Frame): ClientMessage {+        return new ClientMessage(startFrame);     } -    appendBoolean(value: boolean): void {-        return this.appendByte(value ? 
1 : 0);+    static isFlagSet(flags: number, flagMask: number): boolean {+        const i = flags & flagMask;+        return i === flagMask;     } -    appendInt32(value: number): void {-        this.buffer.writeInt32LE(value, this.cursor);-        this.cursor += BitsUtil.INT_SIZE_IN_BYTES;+    getStartFrame(): Frame {+        return this.startFrame;     } -    appendUint8(value: number): void {-        this.buffer.writeUInt8(value, this.cursor);-        this.cursor += BitsUtil.BYTE_SIZE_IN_BYTES;-    }+    add(frame: Frame): void {+        frame.next = null;+        if (this.startFrame == null) {+            this.startFrame = frame;+            this.endFrame = frame;+            return;+        } -    appendLong(value: any): void {-        this.writeLongInternal(value, this.cursor);-        this.cursor += BitsUtil.LONG_SIZE_IN_BYTES;+        this.endFrame.next = frame;+        this.endFrame = frame;     } -    appendString(value: string): void {-        const length = Buffer.byteLength(value, 'utf8');-        this.buffer.writeInt32LE(length, this.cursor);-        this.cursor += 4;-        this.buffer.write(value, this.cursor);-        this.cursor += length;+    frameIterator(): ForwardFrameIterator {
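The bitwise flag handling in the new `ClientMessage` above can be sketched in plain JavaScript. The constant values are copied from the diff, and `isFlagSet` mirrors the mask check it introduces: a flag counts as set only when every bit of the mask is present.

```javascript
// Frame-flag constants, values as they appear in the diff above.
const BEGIN_FRAGMENT_FLAG = 1 << 15;
const END_FRAGMENT_FLAG = 1 << 14;
// A message that fits in one fragment carries both begin and end bits.
const UNFRAGMENTED_MESSAGE = BEGIN_FRAGMENT_FLAG | END_FRAGMENT_FLAG;
const IS_NULL_FLAG = 1 << 10;

// A flag is "set" when all bits of the mask are present in the flags field.
function isFlagSet(flags, flagMask) {
    return (flags & flagMask) === flagMask;
}

console.log(isFlagSet(UNFRAGMENTED_MESSAGE, BEGIN_FRAGMENT_FLAG)); // → true
console.log(isFlagSet(UNFRAGMENTED_MESSAGE, IS_NULL_FLAG));        // → false
```

Comparing against the full mask (rather than truthiness) matters for multi-bit masks like `UNFRAGMENTED_MESSAGE`, where a partial match must not count as set.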

Great idea. Changed the codecs as you suggested

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

  * limitations under the License.
  */

-import Address = require('./Address');
+import {Address} from './Address';
+import {UUID} from './core/UUID';

 export class ClientInfo {

Added name

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

         "hazelcast.client.statistics.period.seconds": 3,
         "hazelcast.invalidation.reconciliation.interval.seconds": 60,
         "hazelcast.invalidation.max.tolerated.miss.count": 10,
-        "hazelcast.invalidation.min.reconciliation.interval.seconds": 30
+        "hazelcast.invalidation.min.reconciliation.interval.seconds": 30,
+        "hazelcast.logging.level": 2,
+        "hazelcast.client.autopipelining.enabled": true,

I will do it

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

  * limitations under the License.
  */

-var Client = require('hazelcast-client').Client;
-// Start the Hazelcast Client and connect to an already running Hazelcast Cluster on 127.0.0.1
-Client.newHazelcastClient().then(function (hz) {
-    var counter;
-    // Get an Atomic Counter, we'll call it "counter"
-    hz.getAtomicLong('counter').then(function (c) {
-        counter = c;
-        // Add and Get the "counter"
-        return counter.addAndGet(3);
-    }).then(function (value) {
-        return counter.get();
-    }).then(function (value) {
-        // Display the "counter" value
-        console.log('counter: ' + value);
-        // Shutdown this Hazelcast Client
-        hz.shutdown();
-    });
-});
+// TODO write CP AtomicLong sample

I remember we had them in the .org site but I couldn't find it, so I removed them.

mdumandag

comment created time in 14 days

Pull request review comment hazelcast/hazelcast-nodejs-client

[WIP] Client 4.0

 hz.getReliableTopic('my-distributed-topic').then(function (t) {

 Hazelcast Reliable Topic uses `MessageListener` to listen to the events that occur when a message is received. See the [Message Listener section](#7524-message-listener) for information on how to create a message listener object and register it.

-## 7.4.9. Using Lock

CP Subsystem is in the scope of 4.0. I thought we could add the CP versions of the removed sections when we implement it.

mdumandag

comment created time in 14 days

push event mdumandag/hazelcast-client-protocol

Bence Eros

commit sha b9d11b696ba31878cb0eb0ffac6b6ba26a097889

Supporting MemberInfo#addressMap (#316) * supporting Memberinfo#addressMap * changing "since" attr of newly added members from 2.0 to 2.1 * fixing template : missing whitespace after if (causes checkstyle errors in the generated code) * representing ProtocolType with its ordinal * changing to ProtocolType.WAN in reference_objects

view details

sancar

commit sha 5e8dd1a0831e917123dd6ac6afb1d9d005bd9482

Correct since version as 2.1 for multimap.putall Related to the wrongly merged PR: https://github.com/hazelcast/hazelcast-client-protocol/pull/313

view details

Asım Arslan

commit sha c27615d7b19f61be17067a377ba0c50f20d7d972

ProtocolType enum is removed from protocol custom types (#324) * ProtocolType enum is removed from protocol custom types and handled as a builtin enum (as int) * EndpointQualifier default value added

view details

mdumandag

commit sha ff2902b244fdaaeeff4c2e9d1d4297ba995df636

add support for nodejs 4.0 protocol

view details

mdumandag

commit sha 1a5b5f854806a81e3b26ec37d50038f173469223

tslint fixes

view details

mdumandag

commit sha bf8cc99acb67ae93751333528a0b623184800a7c

address review comments

view details

push time in 14 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 22caae715294bc03f90d479181d6fd548941f7a6

address review comments

view details

push time in 14 days

Pull request review comment hazelcast/hazelcast-client-protocol

ProtocolType enum is removed from protocol custom types

  def __init__(self, most_sig_bits, least_sig_bits):
     },
     'BitmapIndexOptions': {
         'uniqueKeyTransformation': 1
-    },
-    'ProtocolType': {

We should add 'EndpointQualifier': { 'type': 2 } since right now the binary encoder is writing 25 (default value for int) to the binary files but there is no enum for the id of 25.

asimarslan

comment created time in 15 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 9d20a2bdddf497a9d7fdf495d9cbc43045c8c003

Fix incorrent @link and @see usages on comments

view details

push time in 17 days

PR opened hazelcast/hazelcast-nodejs-client

[WIP] Update dependencies

depends on #533

+20058 -15793

0 comment

563 changed files

pr created time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha f470930c257bbebc879a37083b5aa077ab01b763

Fix eslint warnings and errors

view details

push time in 17 days

PR opened hazelcast/hazelcast-client-protocol

Fix Node.js template

Our eslint config does not allow unused variables, so RESPONSE_PARAMS, which is there for debugging purposes, is converted into a comment.

Also, since nullable UUIDs are handled inside the FixSizedTypesCodec, there is no need to import CodecUtil in that case.

depends on #321

+597 -28

0 comment

7 changed files

pr created time in 17 days

create branch mdumandag/hazelcast-client-protocol

branch : update-dependencies

created branch time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 314332d48cd4ba493993bba11cb474677e5b79a7

Remove dependency on safe-buffer We are going to pick Node.js 8 as the minimum supported version, which already contains the methods polyfilled by safe-buffer. Hence, there is no need for this extra dependency.

view details

mdumandag

commit sha 2f6e0c61b5cd51588e44e851762b73eedaab7367

Update dependencies Updated the dependencies to the latest versions where possible. There are a few exceptions. The istanbul package is not maintained anymore. They recommend migrating to nyc for coverage. The tslint package is also not going to be maintained. Eslint is the most viable alternative for that. However, the latest version of eslint (v7.0) does not support Node.js 8. So, I used the latest version that supports Node.js 8 (v6.8).

view details

push time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha d027570f0b8cef079f7ca57d21291989450d5671

change Util#pad param ordering

view details

mdumandag

commit sha 314332d48cd4ba493993bba11cb474677e5b79a7

Remove dependency on safe-buffer We are going to pick Node.js 8 as the minimum supported version, which already contains the methods polyfilled by safe-buffer. Hence, there is no need for this extra dependency.

view details

push time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha d027570f0b8cef079f7ca57d21291989450d5671

change Util#pad param ordering

view details

mdumandag

commit sha 443dd31982eaa350041cc93667e1bc74c5f01b58

Remove dependency on safe-buffer We are going to pick Node.js 8 as the minimum supported version, which already contains the methods polyfilled by safe-buffer. Hence, there is no need for this extra dependency.

view details

mdumandag

commit sha 7672a49ef7c7ec67aa9f19fa6461f7f778e5c7ee

Update dependencies Updated the dependencies to the latest versions where possible. There are a few exceptions. The istanbul package is not maintained anymore. They recommend migrating to nyc for coverage. The tslint package is also not going to be maintained. Eslint is the most viable alternative for that. However, the latest version of eslint (v7.0) does not support Node.js 8. So, I used the latest version that supports Node.js 8 (v6.8).

view details

push time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha d027570f0b8cef079f7ca57d21291989450d5671

change Util#pad param ordering

view details

push time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha 7da00db9a39b1520e5d7145a213c5f2f84cb1016

wip

view details

push time in 17 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy+import time+import threading+from uuid import uuid4 +from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError+from hazelcast.proxy.base import Proxy, TopicMessage+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE+from hazelcast.serialization.reliable_topic import ReliableTopicMessage+from hazelcast.util import current_time_in_millis+from hazelcast.six.moves import queue -class ReliableTopic(Proxy):-    def add_listener(self, on_message=None):+_INITIAL_BACKOFF = 0.1+_MAX_BACKOFF = 2+++class ReliableMessageListener(object):+    def on_message(self, item):+        """+        Invoked when a message is received for the added reliable topic.++        :param: message the message that is received for the added reliable topic+        """         raise NotImplementedError +    def retrieve_initial_sequence(self):+        """+        Retrieves the initial sequence from which this ReliableMessageListener+        should start.++        Return -1 if there is no initial sequence and you want to start+        from the next published message.++        If you intend to create a durable subscriber so you continue from where+        you stopped the previous time, load the previous sequence and add 1.+        If you don't add one, then you will be receiving the same message twice.++        :return: (int), the initial sequence+        """+        return -1++    def store_sequence(self, sequence):+        """"+        Informs the ReliableMessageListener that it should store the sequence.+        This method is called before the message is processed. 
Can be used to+        make a durable subscription.++        :param: (int) ``sequence`` the sequence+        """+        pass++    def is_loss_tolerant(self):+        """+        Checks if this ReliableMessageListener is able to deal with message loss.+        Even though the reliable topic promises to be reliable, it can be that a+        MessageListener is too slow. Eventually the message won't be available+        anymore.++        If the ReliableMessageListener is not loss tolerant and the topic detects+        that there are missing messages, it will terminate the+        ReliableMessageListener.++        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.+        """+        return False++    def is_terminal(self):+        """+        Checks if the ReliableMessageListener should be terminated based on an+        exception thrown while calling on_message.++        :return: (bool) ``True` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.+        """+        raise False+++class _MessageListener(object):+    def __init__(self, uuid, proxy, to_object, listener):+        self._id = uuid+        self._proxy = proxy+        self._to_object = to_object+        self._listener = listener+        self._cancelled_lock = threading.Lock()+        self._cancelled = False+        self._sequence = 0+        self._q = queue.Queue()++    def start(self):+        tail_seq = self._proxy.ringbuffer.tail_sequence()+        initial_seq = self._listener.retrieve_initial_sequence()+        if initial_seq == -1:+            initial_seq = tail_seq.result() + 1+        self._sequence = initial_seq+        self._proxy.client.reactor.add_timer(0, self._next)++    def _handle_illegal_argument_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()+        self._proxy.logger.warning("MessageListener {} on topic {} requested a too large sequence. 
Jumping from old "+                                   "sequence: {} to sequence: {}".format(self._id, self._proxy.name, self._sequence,+                                                                         head_seq))+        self._sequence = head_seq+        self._next()++    def _handle_stale_sequence_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()

Same as above

buraksezer

comment created time in 17 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy+import time+import threading+from uuid import uuid4 +from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError+from hazelcast.proxy.base import Proxy, TopicMessage+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE+from hazelcast.serialization.reliable_topic import ReliableTopicMessage+from hazelcast.util import current_time_in_millis+from hazelcast.six.moves import queue -class ReliableTopic(Proxy):-    def add_listener(self, on_message=None):+_INITIAL_BACKOFF = 0.1+_MAX_BACKOFF = 2+++class ReliableMessageListener(object):+    def on_message(self, item):+        """+        Invoked when a message is received for the added reliable topic.++        :param: message the message that is received for the added reliable topic+        """         raise NotImplementedError +    def retrieve_initial_sequence(self):+        """+        Retrieves the initial sequence from which this ReliableMessageListener+        should start.++        Return -1 if there is no initial sequence and you want to start+        from the next published message.++        If you intend to create a durable subscriber so you continue from where+        you stopped the previous time, load the previous sequence and add 1.+        If you don't add one, then you will be receiving the same message twice.++        :return: (int), the initial sequence+        """+        return -1++    def store_sequence(self, sequence):+        """"+        Informs the ReliableMessageListener that it should store the sequence.+        This method is called before the message is processed. 
Can be used to+        make a durable subscription.++        :param: (int) ``sequence`` the sequence+        """+        pass++    def is_loss_tolerant(self):+        """+        Checks if this ReliableMessageListener is able to deal with message loss.+        Even though the reliable topic promises to be reliable, it can be that a+        MessageListener is too slow. Eventually the message won't be available+        anymore.++        If the ReliableMessageListener is not loss tolerant and the topic detects+        that there are missing messages, it will terminate the+        ReliableMessageListener.++        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.+        """+        return False++    def is_terminal(self):+        """+        Checks if the ReliableMessageListener should be terminated based on an+        exception thrown while calling on_message.++        :return: (bool) ``True` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.+        """+        raise False+++class _MessageListener(object):+    def __init__(self, uuid, proxy, to_object, listener):+        self._id = uuid+        self._proxy = proxy+        self._to_object = to_object+        self._listener = listener+        self._cancelled_lock = threading.Lock()+        self._cancelled = False+        self._sequence = 0+        self._q = queue.Queue()++    def start(self):+        tail_seq = self._proxy.ringbuffer.tail_sequence()+        initial_seq = self._listener.retrieve_initial_sequence()+        if initial_seq == -1:+            initial_seq = tail_seq.result() + 1+        self._sequence = initial_seq+        self._proxy.client.reactor.add_timer(0, self._next)++    def _handle_illegal_argument_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()+        self._proxy.logger.warning("MessageListener {} on topic {} requested a too large sequence. 
Jumping from old "+                                   "sequence: {} to sequence: {}".format(self._id, self._proxy.name, self._sequence,+                                                                         head_seq))+        self._sequence = head_seq+        self._next()++    def _handle_stale_sequence_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()+        if self._listener.is_loss_tolerant:+            self._sequence = head_seq+            self._proxy.logger.warning("Topic {} ran into a stale sequence. Jumping from old sequence {} to new "+                                       "sequence {}".format(self._proxy.name, self._sequence, head_seq))+            self._next()+            return True++        self._proxy.logger.warning(+            "Terminating Message Listener: {} on topic: {}. Reason: The listener was too slow or the retention "+            "period of the message has been violated. Head: {}, sequence: {}".format(self._id, self._proxy.name,+                                                                                     head_seq, self._sequence))+        return False++    def _handle_operation_timeout_error(self):+        self._proxy.logger.info("Message Listener ", self._proxy.id, "on topic: ", self._proxy.name, " timed out. " ++                                "Continuing from the last known sequence ", self._proxy.sequence)+        self._next()++    def _handle_exception(self, exception):+        base_msg = "Terminating Message Listener: " + self._id + " on topic: " + self._proxy.name + ". 
Reason: "+        if isinstance(exception, IllegalArgumentError) and self._listener.is_loss_tolerant():+            self._handle_illegal_argument_error()+            return+        elif isinstance(exception, StaleSequenceError):+            if self._handle_stale_sequence_error():+                return+        elif isinstance(exception, OperationTimeoutError):+            self._handle_operation_timeout_error()+            return+        elif isinstance(exception, HazelcastInstanceNotActiveError):+            self._proxy.logger.info(base_msg + "HazelcastInstance is shutting down.")+        elif isinstance(exception, HazelcastClientNotActiveException):+            self._proxy.logger.info(base_msg + "HazelcastClient is shutting down.")+        elif isinstance(exception, DistributedObjectDestroyedError):+            self._proxy.logger.info(base_msg + "ReliableTopic is destroyed.")+        else:+            self._proxy.logger.warning(base_msg + "Unhandled error, message: " + str(exception))++        self._cancel_and_remove_listener()++    def _terminate(self, exception):+        with self._cancelled_lock:+            if self._cancelled:+                return True++        base_msg = "Terminating Message Listener: {} on topic: {}. 
Reason: ".format(self._id, self._proxy.name)+        try:+            terminate = self._listener.is_terminal()+            if terminate:+                self._proxy.logger.warning(base_msg + "Unhandled error: {}".format(str(exception)))+                return True++            self._proxy.logger.warning("MessageListener {} on topic: {} ran into an error: {}".+                                       format(self._id, self._proxy.name, str(exception)))+            return False++        except Exception as e:+            self._proxy.logger.warning(base_msg + "Unhandled error while calling ReliableMessageListener.is_terminal() "+                                                  "method: {}".format(str(e)))+            return True++    def _process(self, msg):+        try:+            self._listener.on_message(msg)+        except BaseException as e:+            if self._terminate(e):+                self._cancel_and_remove_listener()++    def _on_response(self, res):+        try:+            for message in res.result():+                with self._cancelled_lock:+                    if self._cancelled:+                        return++                msg = TopicMessage(+                    self._proxy.name,+                    message.payload,+                    message.publish_time,+                    message.publisher_address,+                    self._to_object+                )+                self._listener.store_sequence(self._sequence)+                self._process(msg)+                self._sequence += 1++            # Await for new messages+            self._next()+        except Exception as e:+            self._handle_exception(e)++    def _next(self):+        def _read_many():+            with self._cancelled_lock:+                if self._cancelled:+                    return++            future = self._proxy.ringbuffer.read_many(self._sequence, 1, self._proxy.config.read_batch_size)+            future.continue_with(self._on_response)++        
self._proxy.client.reactor.add_timer(0, _read_many)++    def cancel(self):+        with self._cancelled_lock:+            self._cancelled = True++    def _cancel_and_remove_listener(self):+        try:+            # _proxy.remove_listener calls listener.cancel function+            self._proxy.remove_listener(self._id)+        except IllegalArgumentError as e:+            # This listener is already removed+            self._proxy.logger.debug("Failed to remove listener. Reason: {}".format(str(e)))+++class ReliableTopic(Proxy):+    """+    Hazelcast provides distribution mechanism for publishing messages that are delivered to multiple subscribers, which+    is also known as a publish/subscribe (pub/sub) messaging model. Publish and subscriptions are cluster-wide. When a+    member subscribes for a topic, it is actually registering for messages published by any member in the cluster,+    including the new members joined after you added the listener.++    Messages are ordered, meaning that listeners(subscribers) will process the messages in the order they are actually+    published.++    Hazelcast's Reliable Topic uses the same Topic interface as a regular topic. 
The main difference is that Reliable+    Topic is backed up by the Ringbuffer data structure, a replicated but not partitioned data structure that stores+    its data in a ring-like structure.+    """++    def __init__(self, client, service_name, name):+        super(ReliableTopic, self).__init__(client, service_name, name)++        config = client.config.reliable_topic_configs.get(name, None)+        if config is None:+            config = ReliableTopicConfig()++        self.client = client+        self.config = config+        self._topic_overload_policy = self.config.topic_overload_policy+        self.ringbuffer = client.get_ringbuffer("_hz_rb_" + name)+        self._message_listeners_lock = threading.RLock()+        self._message_listeners = {}++    def add_listener(self, reliable_topic_listener):+        """+        Subscribes to this reliable topic. When someone publishes a message on this topic, on_message() method of+        ReliableTopicListener is called.++        :param ReliableTopicListener: (Class), class to be used when a message is published.

:param ReliableTopicListener: -> :param reliable_topic_listener:

buraksezer

comment created time in 17 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy+import time+import threading+from uuid import uuid4 +from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError+from hazelcast.proxy.base import Proxy, TopicMessage+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE+from hazelcast.serialization.reliable_topic import ReliableTopicMessage+from hazelcast.util import current_time_in_millis+from hazelcast.six.moves import queue -class ReliableTopic(Proxy):-    def add_listener(self, on_message=None):+_INITIAL_BACKOFF = 0.1+_MAX_BACKOFF = 2+++class ReliableMessageListener(object):+    def on_message(self, item):+        """+        Invoked when a message is received for the added reliable topic.++        :param: message the message that is received for the added reliable topic+        """         raise NotImplementedError +    def retrieve_initial_sequence(self):+        """+        Retrieves the initial sequence from which this ReliableMessageListener+        should start.++        Return -1 if there is no initial sequence and you want to start+        from the next published message.++        If you intend to create a durable subscriber so you continue from where+        you stopped the previous time, load the previous sequence and add 1.+        If you don't add one, then you will be receiving the same message twice.++        :return: (int), the initial sequence+        """+        return -1++    def store_sequence(self, sequence):+        """"+        Informs the ReliableMessageListener that it should store the sequence.+        This method is called before the message is processed. 
Can be used to+        make a durable subscription.++        :param: (int) ``sequence`` the sequence+        """+        pass++    def is_loss_tolerant(self):+        """+        Checks if this ReliableMessageListener is able to deal with message loss.+        Even though the reliable topic promises to be reliable, it can be that a+        MessageListener is too slow. Eventually the message won't be available+        anymore.++        If the ReliableMessageListener is not loss tolerant and the topic detects+        that there are missing messages, it will terminate the+        ReliableMessageListener.++        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.+        """+        return False++    def is_terminal(self):+        """+        Checks if the ReliableMessageListener should be terminated based on an+        exception thrown while calling on_message.++        :return: (bool) ``True` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.+        """+        raise False+++class _MessageListener(object):+    def __init__(self, uuid, proxy, to_object, listener):+        self._id = uuid+        self._proxy = proxy+        self._to_object = to_object+        self._listener = listener+        self._cancelled_lock = threading.Lock()+        self._cancelled = False+        self._sequence = 0+        self._q = queue.Queue()++    def start(self):+        tail_seq = self._proxy.ringbuffer.tail_sequence()+        initial_seq = self._listener.retrieve_initial_sequence()+        if initial_seq == -1:+            initial_seq = tail_seq.result() + 1+        self._sequence = initial_seq+        self._proxy.client.reactor.add_timer(0, self._next)++    def _handle_illegal_argument_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()

So, if we are at this line, we are on the reactor thread. If you call .result() on the reactor thread, you will have a deadlock (because the reactor thread itself is the one that sets the results of futures). Therefore, there is a check on the future objects that forces you to use add_done_callback in scenarios like this. I believe that with the current test suite, we are not hitting this line.

buraksezer

comment created time in 17 days
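A minimal sketch of the hazard the comment above describes, using a toy single-threaded reactor and the stdlib Future as a stand-in for the client's own future type (names like read_head_sequence are hypothetical): blocking on .result() from the reactor thread would wait forever, so a continuation is chained instead.

```python
from concurrent.futures import Future

# Toy single-threaded "reactor": the same thread that runs user callbacks
# is also the one that sets future results, so user code on it must never
# block waiting for a result.
class Reactor:
    def __init__(self):
        self._tasks = []

    def add_timer(self, _delay, fn):
        self._tasks.append(fn)

    def run(self):
        while self._tasks:
            self._tasks.pop(0)()

reactor = Reactor()
results = []

def read_head_sequence():
    future = Future()
    # The reactor completes this future later; calling future.result()
    # before that point, from the reactor thread, would deadlock.
    reactor.add_timer(0, lambda: future.set_result(42))
    return future

def on_head_sequence(future):
    # Safe: the callback only runs once the result has been set.
    results.append(future.result())

future = read_head_sequence()
future.add_done_callback(on_head_sequence)  # non-blocking continuation
reactor.run()
```

The same shape applies to the head_sequence().result() call the reviewer flagged: replacing it with a chained callback keeps the reactor thread free.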

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy+import time+import threading+from uuid import uuid4 +from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError+from hazelcast.proxy.base import Proxy, TopicMessage+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE+from hazelcast.serialization.reliable_topic import ReliableTopicMessage+from hazelcast.util import current_time_in_millis+from hazelcast.six.moves import queue -class ReliableTopic(Proxy):-    def add_listener(self, on_message=None):+_INITIAL_BACKOFF = 0.1+_MAX_BACKOFF = 2+++class ReliableMessageListener(object):+    def on_message(self, item):+        """+        Invoked when a message is received for the added reliable topic.++        :param: message the message that is received for the added reliable topic+        """         raise NotImplementedError +    def retrieve_initial_sequence(self):+        """+        Retrieves the initial sequence from which this ReliableMessageListener+        should start.++        Return -1 if there is no initial sequence and you want to start+        from the next published message.++        If you intend to create a durable subscriber so you continue from where+        you stopped the previous time, load the previous sequence and add 1.+        If you don't add one, then you will be receiving the same message twice.++        :return: (int), the initial sequence+        """+        return -1++    def store_sequence(self, sequence):+        """"+        Informs the ReliableMessageListener that it should store the sequence.+        This method is called before the message is processed. 
Can be used to+        make a durable subscription.++        :param: (int) ``sequence`` the sequence+        """+        pass++    def is_loss_tolerant(self):+        """+        Checks if this ReliableMessageListener is able to deal with message loss.+        Even though the reliable topic promises to be reliable, it can be that a+        MessageListener is too slow. Eventually the message won't be available+        anymore.++        If the ReliableMessageListener is not loss tolerant and the topic detects+        that there are missing messages, it will terminate the+        ReliableMessageListener.++        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.+        """+        return False++    def is_terminal(self):+        """+        Checks if the ReliableMessageListener should be terminated based on an+        exception thrown while calling on_message.++        :return: (bool) ``True` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.+        """+        raise False+++class _MessageListener(object):+    def __init__(self, uuid, proxy, to_object, listener):+        self._id = uuid+        self._proxy = proxy+        self._to_object = to_object+        self._listener = listener+        self._cancelled_lock = threading.Lock()+        self._cancelled = False+        self._sequence = 0+        self._q = queue.Queue()++    def start(self):+        tail_seq = self._proxy.ringbuffer.tail_sequence()+        initial_seq = self._listener.retrieve_initial_sequence()+        if initial_seq == -1:+            initial_seq = tail_seq.result() + 1+        self._sequence = initial_seq+        self._proxy.client.reactor.add_timer(0, self._next)++    def _handle_illegal_argument_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()+        self._proxy.logger.warning("MessageListener {} on topic {} requested a too large sequence. 
Jumping from old "+                                   "sequence: {} to sequence: {}".format(self._id, self._proxy.name, self._sequence,+                                                                         head_seq))+        self._sequence = head_seq+        self._next()++    def _handle_stale_sequence_error(self):+        head_seq = self._proxy.ringbuffer.head_sequence().result()+        if self._listener.is_loss_tolerant:+            self._sequence = head_seq+            self._proxy.logger.warning("Topic {} ran into a stale sequence. Jumping from old sequence {} to new "+                                       "sequence {}".format(self._proxy.name, self._sequence, head_seq))+            self._next()+            return True++        self._proxy.logger.warning(+            "Terminating Message Listener: {} on topic: {}. Reason: The listener was too slow or the retention "+            "period of the message has been violated. Head: {}, sequence: {}".format(self._id, self._proxy.name,+                                                                                     head_seq, self._sequence))+        return False++    def _handle_operation_timeout_error(self):+        self._proxy.logger.info("Message Listener ", self._proxy.id, "on topic: ", self._proxy.name, " timed out. " ++                                "Continuing from the last known sequence ", self._proxy.sequence)+        self._next()++    def _handle_exception(self, exception):+        base_msg = "Terminating Message Listener: " + self._id + " on topic: " + self._proxy.name + ". 
Reason: "+        if isinstance(exception, IllegalArgumentError) and self._listener.is_loss_tolerant():+            self._handle_illegal_argument_error()+            return+        elif isinstance(exception, StaleSequenceError):+            if self._handle_stale_sequence_error():+                return+        elif isinstance(exception, OperationTimeoutError):+            self._handle_operation_timeout_error()+            return+        elif isinstance(exception, HazelcastInstanceNotActiveError):+            self._proxy.logger.info(base_msg + "HazelcastInstance is shutting down.")+        elif isinstance(exception, HazelcastClientNotActiveException):+            self._proxy.logger.info(base_msg + "HazelcastClient is shutting down.")+        elif isinstance(exception, DistributedObjectDestroyedError):+            self._proxy.logger.info(base_msg + "ReliableTopic is destroyed.")+        else:+            self._proxy.logger.warning(base_msg + "Unhandled error, message: " + str(exception))++        self._cancel_and_remove_listener()++    def _terminate(self, exception):+        with self._cancelled_lock:+            if self._cancelled:+                return True++        base_msg = "Terminating Message Listener: {} on topic: {}. 
Reason: ".format(self._id, self._proxy.name)+        try:+            terminate = self._listener.is_terminal()+            if terminate:+                self._proxy.logger.warning(base_msg + "Unhandled error: {}".format(str(exception)))+                return True++            self._proxy.logger.warning("MessageListener {} on topic: {} ran into an error: {}".+                                       format(self._id, self._proxy.name, str(exception)))+            return False++        except Exception as e:+            self._proxy.logger.warning(base_msg + "Unhandled error while calling ReliableMessageListener.is_terminal() "+                                                  "method: {}".format(str(e)))+            return True++    def _process(self, msg):+        try:+            self._listener.on_message(msg)+        except BaseException as e:+            if self._terminate(e):+                self._cancel_and_remove_listener()++    def _on_response(self, res):+        try:+            for message in res.result():+                with self._cancelled_lock:+                    if self._cancelled:+                        return++                msg = TopicMessage(+                    self._proxy.name,+                    message.payload,+                    message.publish_time,+                    message.publisher_address,+                    self._to_object+                )+                self._listener.store_sequence(self._sequence)+                self._process(msg)+                self._sequence += 1++            # Await for new messages+            self._next()+        except Exception as e:+            self._handle_exception(e)++    def _next(self):+        def _read_many():+            with self._cancelled_lock:+                if self._cancelled:+                    return++            future = self._proxy.ringbuffer.read_many(self._sequence, 1, self._proxy.config.read_batch_size)+            future.continue_with(self._on_response)++        
self._proxy.client.reactor.add_timer(0, _read_many)++    def cancel(self):+        with self._cancelled_lock:+            self._cancelled = True++    def _cancel_and_remove_listener(self):+        try:+            # _proxy.remove_listener calls listener.cancel function+            self._proxy.remove_listener(self._id)+        except IllegalArgumentError as e:+            # This listener is already removed+            self._proxy.logger.debug("Failed to remove listener. Reason: {}".format(str(e)))+++class ReliableTopic(Proxy):+    """+    Hazelcast provides distribution mechanism for publishing messages that are delivered to multiple subscribers, which+    is also known as a publish/subscribe (pub/sub) messaging model. Publish and subscriptions are cluster-wide. When a+    member subscribes for a topic, it is actually registering for messages published by any member in the cluster,+    including the new members joined after you added the listener.++    Messages are ordered, meaning that listeners(subscribers) will process the messages in the order they are actually+    published.++    Hazelcast's Reliable Topic uses the same Topic interface as a regular topic. 
The main difference is that Reliable+    Topic is backed up by the Ringbuffer data structure, a replicated but not partitioned data structure that stores+    its data in a ring-like structure.+    """++    def __init__(self, client, service_name, name):+        super(ReliableTopic, self).__init__(client, service_name, name)++        config = client.config.reliable_topic_configs.get(name, None)+        if config is None:+            config = ReliableTopicConfig()++        self.client = client+        self.config = config+        self._topic_overload_policy = self.config.topic_overload_policy+        self.ringbuffer = client.get_ringbuffer("_hz_rb_" + name)+        self._message_listeners_lock = threading.RLock()+        self._message_listeners = {}++    def add_listener(self, reliable_topic_listener):+        """+        Subscribes to this reliable topic. When someone publishes a message on this topic, on_message() method of+        ReliableTopicListener is called.++        :param ReliableTopicListener: (Class), class to be used when a message is published.+        :return: (str), a registration id which is used as a key to remove the listener.+        """+        if not isinstance(reliable_topic_listener, ReliableMessageListener):+            raise IllegalArgumentError("Message listener is not an instance of ReliableTopicListener")++        registration_id = str(uuid4())+        listener = _MessageListener(registration_id, self, self._to_object, reliable_topic_listener)+        with self._message_listeners_lock:+            self._message_listeners[registration_id] = listener++        listener.start()+        return registration_id++    def _add_with_backoff(self, item):+        sleep_time = _INITIAL_BACKOFF+        while True:+            seq_id = self.ringbuffer.add(item, overflow_policy=OVERFLOW_POLICY_FAIL).result()+            if seq_id != -1:+                return+            time.sleep(sleep_time)+            sleep_time *= 2+            if sleep_time > 
_MAX_BACKOFF:+                sleep_time = _MAX_BACKOFF++    def _add_or_fail(self, item):+        seq_id = self.ringbuffer.add(item, overflow_policy=OVERFLOW_POLICY_FAIL).result()+        if seq_id == -1:+            raise TopicOverflowError("failed to publish message to topic: " + self.name)+     def publish(self, message):-        raise NotImplementedError+        """

This is a blocking API. In general, the APIs provided in the 3.x versions are non-blocking apart from proxy creations and listener registrations. So, I think publish should be non-blocking too.

buraksezer

comment created time in 17 days
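One way the non-blocking publish suggested above could look, sketched with a fake ringbuffer and the stdlib Future in place of the client's future type (all names here are assumptions, not the client's actual API): the caller receives a future instead of the calling thread blocking on .result().

```python
from concurrent.futures import Future

OVERFLOW_POLICY_FAIL = 1

# Fake ringbuffer standing in for the client's proxy; add() returns a
# future of the sequence id, which is -1 when the buffer is full.
class FakeRingbuffer:
    def __init__(self, fail_times):
        self._fails_left = fail_times

    def add(self, item, overflow_policy):
        f = Future()
        if self._fails_left > 0:
            self._fails_left -= 1
            f.set_result(-1)
        else:
            f.set_result(100)
        return f

def publish_non_blocking(ringbuffer, item):
    # Hypothetical non-blocking variant of _add_or_fail: chain a callback
    # on the ringbuffer future instead of calling .result() in publish.
    result = Future()

    def on_add(f):
        seq_id = f.result()
        if seq_id == -1:
            result.set_exception(RuntimeError("failed to publish message"))
        else:
            result.set_result(seq_id)

    ringbuffer.add(item, overflow_policy=OVERFLOW_POLICY_FAIL).add_done_callback(on_add)
    return result

future = publish_non_blocking(FakeRingbuffer(fail_times=0), "msg")
```

The backoff-and-retry path (_add_with_backoff) would follow the same pattern, rescheduling the add via a reactor timer instead of time.sleep.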

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

-from hazelcast.proxy.base import Proxy+import time+import threading+from uuid import uuid4 +from hazelcast.config import ReliableTopicConfig, TOPIC_OVERLOAD_POLICY+from hazelcast.exception import IllegalArgumentError, TopicOverflowError, HazelcastInstanceNotActiveError, \+    HazelcastClientNotActiveException, DistributedObjectDestroyedError, StaleSequenceError, OperationTimeoutError+from hazelcast.proxy.base import Proxy, TopicMessage+from hazelcast.proxy.ringbuffer import OVERFLOW_POLICY_FAIL, OVERFLOW_POLICY_OVERWRITE+from hazelcast.serialization.reliable_topic import ReliableTopicMessage+from hazelcast.util import current_time_in_millis+from hazelcast.six.moves import queue -class ReliableTopic(Proxy):-    def add_listener(self, on_message=None):+_INITIAL_BACKOFF = 0.1+_MAX_BACKOFF = 2+++class ReliableMessageListener(object):+    def on_message(self, item):+        """+        Invoked when a message is received for the added reliable topic.++        :param: message the message that is received for the added reliable topic+        """         raise NotImplementedError +    def retrieve_initial_sequence(self):+        """+        Retrieves the initial sequence from which this ReliableMessageListener+        should start.++        Return -1 if there is no initial sequence and you want to start+        from the next published message.++        If you intend to create a durable subscriber so you continue from where+        you stopped the previous time, load the previous sequence and add 1.+        If you don't add one, then you will be receiving the same message twice.++        :return: (int), the initial sequence+        """+        return -1++    def store_sequence(self, sequence):+        """"+        Informs the ReliableMessageListener that it should store the sequence.+        This method is called before the message is processed. 
Can be used to+        make a durable subscription.++        :param: (int) ``sequence`` the sequence+        """+        pass++    def is_loss_tolerant(self):+        """+        Checks if this ReliableMessageListener is able to deal with message loss.+        Even though the reliable topic promises to be reliable, it can be that a+        MessageListener is too slow. Eventually the message won't be available+        anymore.++        If the ReliableMessageListener is not loss tolerant and the topic detects+        that there are missing messages, it will terminate the+        ReliableMessageListener.++        :return: (bool) ``True`` if the ReliableMessageListener is tolerant towards losing messages.+        """+        return False++    def is_terminal(self):+        """+        Checks if the ReliableMessageListener should be terminated based on an+        exception thrown while calling on_message.++        :return: (bool) ``True` if the ReliableMessageListener should terminate itself, ``False`` if it should keep on running.+        """+        raise False+++class _MessageListener(object):+    def __init__(self, uuid, proxy, to_object, listener):+        self._id = uuid+        self._proxy = proxy+        self._to_object = to_object+        self._listener = listener+        self._cancelled_lock = threading.Lock()+        self._cancelled = False+        self._sequence = 0+        self._q = queue.Queue()

I guess this is a leftover.

buraksezer

comment created time in 17 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

 def decode_add_listener(response):         def encode_remove_listener(registration_id):             return client_remove_distributed_object_listener_codec.encode_request(registration_id) -        return self._client.listener.register_listener(request, decode_add_listener,-                                                       encode_remove_listener, event_handler)+        return self._client._listener.register_listener(request, decode_add_listener,+                                                        encode_remove_listener, event_handler)      def remove_distributed_object_listener(self, registration_id):-        return self._client.listener.deregister_listener(registration_id)+        return self._client._listener.deregister_listener(registration_id)

This should be reverted. The client does not have _listener; it has listener as the reference to the listener service.

buraksezer

comment created time in 17 days

Pull request review comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

 def decode_add_listener(response):         def encode_remove_listener(registration_id):             return client_remove_distributed_object_listener_codec.encode_request(registration_id) -        return self._client.listener.register_listener(request, decode_add_listener,-                                                       encode_remove_listener, event_handler)+        return self._client._listener.register_listener(request, decode_add_listener,

This should be reverted. The client does not have _listener; it has listener as the reference to the listener service.

buraksezer

comment created time in 17 days

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha d03e241254db570669d392ab9f8e41d31e01b222

wip

view details

push time in 17 days

create branch mdumandag/hazelcast-nodejs-client

branch : update-dependencies

created branch time in 20 days

PR opened hazelcast/hazelcast-nodejs-client

Remove safe buffer dependency

We are going to pick Node.js 8 as the minimum supported version, which already contains the methods polyfilled by safe-buffer. Hence, there is no need for an extra dependency for them.

Protocol PR: https://github.com/hazelcast/hazelcast-client-protocol/pull/321

+19830 -15538

0 comment

542 changed files

pr created time in 21 days

create branch mdumandag/hazelcast-client-protocol

branch : remove-safe-buffer

created branch time in 21 days

pull request comment hazelcast/hazelcast-python-client

Initial ReliableTopic implementation: #201

Hi @buraksezer , thank you so much for your efforts. It is great to see community activity on the Python client. I will be reviewing the PR in detail tomorrow, but I think it looks pretty good. For now, let me answer your questions.

  1. Our ObjectDataInput#read_byte_array method returns a list of byte-sized integers if the option you mentioned is not present. Honestly, I feel like the implementation should always return a bytearray but, for backward compatibility reasons, we cannot change it. The problem you faced is in this line: https://github.com/hazelcast/hazelcast-python-client/blob/master/hazelcast/serialization/input.py#L150 . We are passing a list instead of a bytearray. There is a missing check here. We should cast buff back into a bytearray if self._respect_bytearrays is true. We will change the implementation of read_byte_array in the 4.0 release, but for now we can use this.

  2. I ran the test suite more than 10 times but couldn't reproduce the problem. Also, from the logs, I cannot tell anything useful. It seems like the member was not responding but I don't know why. I will look into it in more detail tomorrow.

  3. You can simply pass None. It is there because the Java member-side reliable topic proxy sends this data as a part of the message but clients do not. On the Java side, both the member and the client use the same class, so this option has to be present but can be None.

  4. I am fine with returning Unix timestamp. It is easy to convert it to datetime object but I would like to leave this to user.

buraksezer

comment created time in 22 days
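Answers 1 and 4 above both boil down to one-liners; here is a sketch of each (the read_byte_array signature and flag name are assumptions modeled on the linked input.py, not the client's exact code):

```python
from datetime import datetime, timezone

# Answer 1: cast the list of byte-sized integers back into a bytearray
# when the config flag asks for real bytearrays (hypothetical helper).
def read_byte_array(buff, respect_bytearrays):
    if respect_bytearrays:
        return bytearray(buff)
    return buff  # legacy behavior: plain list of ints

data = read_byte_array([72, 105], respect_bytearrays=True)

# Answer 4: converting a Unix timestamp in milliseconds to a datetime
# is something the user can easily do themselves.
publish_time_ms = 1577836800000
dt = datetime.fromtimestamp(publish_time_ms / 1000.0, tz=timezone.utc)
```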

push event mdumandag/hazelcast-nodejs-client

mdumandag

commit sha e992bad4c4dff1c7b25b0011f6607269f1e62bf7

fix listener service test

view details

mdumandag

commit sha f67a701e97f23422b3623626cd11354900ab3b29

remove StringSerializationPolicy option Since IMDG 4.0 supports the standard UTF-8 spec, there is no need to support legacy IMDG v3 UTF-8 serialization. Therefore, this PR removes the StringSerializationPolicy option and uses the standard UTF-8 encode/decode mechanisms provided by Node.js

view details

push time in 23 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha c54aff5eb983f796f0c753ab1f6e04359d4942ca

remove StringSerializationPolicy option. Since IMDG 4.0 supports the standard UTF-8 spec, there is no need to support the legacy IMDG v3 UTF-8 serialization. Therefore, this PR removes the StringSerializationPolicy option and uses the standard UTF-8 encode/decode mechanisms provided by Node.js

view details

push time in 23 days

PR opened hazelcast/hazelcast-nodejs-client

Remove string serialization policy

Since IMDG 4.0 supports the standard UTF-8 spec, there is no need to support the legacy IMDG v3 UTF-8 serialization. Therefore, this PR removes the StringSerializationPolicy option and uses the standard UTF-8 encode/decode mechanisms provided by Node.js.

depends on #522

+20162 -15790

0 comment

540 changed files

pr created time in 23 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha c8405eae613fb7c183c590745ac271cd1d14ccad

clean up bitutil

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 6405a4ed8e75d3b189844cbe1b62f2761b0144ca

remove unused authenticator

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 49f563aea40e888cce50aff89ddd088715992209

review some TODOs

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha cadb741e22593220215cad932db6ceaa1a6a5ae4

fix index config test

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha f6c36801632adabd19d18bd4f68c0e9586a15cdf

update copyright

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 573a51a9dc9e5e6808a3c139c66062299422dc9b

simplify invocation retry logic

view details

push time in 24 days

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha b5f2d8b65ac757907e45620fa35ed3d1cdeac239

add connection strategy test

view details

push time in a month

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 11ee0b24da59dba5626237e96fafde110e2f307e

add connection strategy test

view details

push time in a month

push eventmdumandag/hazelcast-nodejs-client

mdumandag

commit sha 08ae006a25229425e51d59e2ec00d5cb0da02224

add connection strategy test

view details

push time in a month
