Karel Minarik karmi @elastic Prague, Czech Republic http://www.karmi.cz

HonzaKral/es-django-example 162

Example Django project using Elasticsearch

karmi/chef-hello-cloud 60

A demo of a full stack Rails application deployment (1x load balancer, 3x appserver, 1x database, 3x elasticsearch) with Chef Server

karmi/chef-solo-hello-world 31

A tutorial for Chef Solo

karmi/couchdb-showcase 19

A small application to demonstrate basic CouchDB features

karmi/clearance_http_auth 17

Simple, instant HTTP Basic Authentication for applications using Clearance

karmi/colibriary 8

Demo application to show how to use Cucumber stories in Czech

karmi/elastic-stack-demo 3

Demo of the observability features of the Elastic Stack

dryaf/demo_mongoid_tire 2

sample application and bug reporting app

karmi/bouncy_air 2

Just. Fooling. Around

issue comment elastic/go-elasticsearch

Get empty slice when reading body though logger reports valid response size

Curious about the logger: there are a couple of different loggers provided by the package, including a JSON-based one. Did you have some specific requirement which they didn't support?

sptrakesh

comment created time in 2 days

issue comment elastic/go-elasticsearch

Get empty slice when reading body though logger reports valid response size

Just so I understand, the exact same code, running against Elastic Cloud, works locally, but not when executed in the GKE pod?

sptrakesh

comment created time in 3 days

issue comment elastic/go-elasticsearch

Get empty slice when reading body though logger reports valid response size

I'll try to replicate this against an Elastic Cloud cluster, but in the meantime, can you fmt.Println(res) to see whether the body is indeed empty? You can also enable the response body logging. And what happens when you curl the document URL?

sptrakesh

comment created time in 3 days

issue comment elastic/go-elasticsearch

Get empty slice when reading body though logger reports valid response size

Hi, this is indeed weird. Could you please post more information about the version of the client, and the snippet of Go code which executes the ES API? Also, is this running against Elastic Cloud?

sptrakesh

comment created time in 3 days

issue comment elastic/go-elasticsearch

Weird scroll param issue

Argh! Yeah, without the version pinned, it probably loads an ancient version, which had a bug in the duration handling. No worries, glad it's sorted out!

rif

comment created time in 3 days

issue comment elastic/go-elasticsearch

Weird scroll param issue

Hey! This is pretty weird — there's a bunch of tests for the Scroll API. Also, I'm not aware of any change to the related code in a long time.

Here's an isolated test I've been using locally; it runs fine on the main and 7.x branches. Can you give it a try?

//+build ignore

package main

import (
	"bytes"
	"io"
	"log"
	"os"
	"strconv"
	"strings"
	"time"

	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/estransport"
	"github.com/tidwall/gjson"
)

func main() {
	log.SetFlags(0)

	var (
		batchNum int
		scrollID string

		scrollDuration = 15 * time.Second
	)

	es, err := elasticsearch.NewClient(elasticsearch.Config{
		Logger: &estransport.ColorLogger{
			Output:             os.Stdout,
			EnableRequestBody:  true,
			EnableResponseBody: true,
		},
	})
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}

	// Index 100 documents into the "test-scroll" index
	//
	log.Println("Indexing the documents...")
	for i := 1; i <= 100; i++ {
		res, err := es.Index(
			"test-scroll",
			strings.NewReader(`{"title" : "test"}`),
			es.Index.WithDocumentID(strconv.Itoa(i)),
		)
		if err != nil {
			log.Fatalf("Error: %s", err)
		}
		if res.IsError() {
			log.Fatalf("Error response: %s", res)
		}
		res.Body.Close()
	}
	es.Indices.Refresh(es.Indices.Refresh.WithIndex("test-scroll"))

	// Perform the initial search request to get
	// the first batch of data and the scroll ID
	//
	log.Println("Scrolling the index...")
	log.Println(strings.Repeat("-", 80))
	res, err := es.Search(
		es.Search.WithIndex("test-scroll"),
		es.Search.WithSort("_doc"),
		es.Search.WithSize(10),
		es.Search.WithScroll(scrollDuration),
	)
	if err != nil {
		log.Fatalf("Error: %s", err)
	}

	// Handle the first batch of data and extract the scrollID
	//
	json := read(res.Body)
	res.Body.Close()

	scrollID = gjson.Get(json, "_scroll_id").String()

	log.Println("Batch   ", batchNum)
	log.Println("ScrollID", scrollID)
	log.Println("IDs     ", gjson.Get(json, "hits.hits.#._id"))
	log.Println(strings.Repeat("-", 80))

	// Perform the scroll requests in sequence
	//
	for {
		batchNum++

		// Perform the scroll request and pass the scrollID and scroll duration
		//
		res, err := es.Scroll(es.Scroll.WithScrollID(scrollID), es.Scroll.WithScroll(scrollDuration))
		if err != nil {
			log.Fatalf("Error: %s", err)
		}
		if res.IsError() {
			log.Fatalf("Error response: %s", res)
		}

		json := read(res.Body)
		res.Body.Close()

		// Extract the scrollID from response
		//
		scrollID = gjson.Get(json, "_scroll_id").String()

		// Extract the search results
		//
		hits := gjson.Get(json, "hits.hits")

		// Break out of the loop when there are no results
		//
		if len(hits.Array()) < 1 {
			log.Println("Finished scrolling")
			break
		} else {
			log.Println("Batch   ", batchNum)
			log.Println("ScrollID", scrollID)
			log.Println("IDs     ", gjson.Get(hits.Raw, "#._id"))
			log.Println(strings.Repeat("-", 80))
		}
	}
}

func read(r io.Reader) string {
	var b bytes.Buffer
	b.ReadFrom(r)
	return b.String()
}
rif

comment created time in 3 days

issue comment elastic/go-elasticsearch

Got 'An HTTP line is larger than 4096 bytes' error when use scroll mechanism

Hi! This is a known issue. The client maps 1:1 to the ES API, and therefore c.Scroll.WithScrollID() will pass the value as a URL parameter, which will break for large values. The solution is to pass the value in the body.

DmSide

comment created time in 4 days

PR closed elastic/go-elasticsearch

BulkIndexer - Reinstate item.Body after it is consumed

This addresses the bug documented here: #161 Duplicate of https://github.com/elastic/go-elasticsearch/pull/164

+46 -5

6 comments

2 changed files

l1redd

pr closed time in 5 days

pull request comment elastic/go-elasticsearch

BulkIndexer - Reinstate item.Body after it is consumed

Merged via 8413c97f and 09b44b16.

l1redd

comment created time in 5 days

pull request comment elastic/go-elasticsearch

BulkIndexer - Reinstate item.Body after it is consumed

You're right — the linter complains about a struct field in the esapi package, which is something that has to be addressed in the generator. I'll look into that soon.

Many thanks for the patch! I've tweaked the commit message a bit, to make it more consistent with the repo conventions, and merged it into the main and 7.x branches.

l1redd

comment created time in 5 days

push event elastic/go-elasticsearch

Lindsey Redd

commit sha 09b44b16bdb7ac5d16395907dd4d44974bc8d875

Util: Reinstate item.Body after it is consumed in BulkIndexer To allow accessing the item body in success/error callbacks, reinstate it when the callbacks are defined. Closes #161 (cherry picked from commit 55d65cdfc77a15b545e12339d613f9a7270fef2f)

view details

push time in 5 days

push event elastic/go-elasticsearch

Lindsey Redd

commit sha 8413c97f30112984737796ca9db1e93a11fe7e5a

Util: Reinstate item.Body after it is consumed in BulkIndexer To allow accessing the item body in success/error callbacks, reinstate it when the callbacks are defined. Closes #161 (cherry picked from commit 55d65cdfc77a15b545e12339d613f9a7270fef2f)

view details

push time in 5 days

issue closed elastic/go-elasticsearch

[BUG] BulkIndexerItem body io.Reader used before OnFailure is called

It is impossible to read from the item.Body io.Reader that is in the function signature of OnFailure in the BulkIndexerItem because it has already been read from in the esutil/bulk_indexer.go code.

The item.Body io.Reader is read from here: https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/esutil/bulk_indexer.go#L415

Then this same item is used as an argument in the OnFailure function: https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/esutil/bulk_indexer.go#L524

So when I try to access it in the OnFailure function, it returns an empty string. For example:

err = bi.Add(
	ctx,
	esutil.BulkIndexerItem{
		Action:     "index",
		Body:       bytes.NewReader(someBytes),
		DocumentID: "1",
		OnFailure: func(ctx context.Context, item esutil.BulkIndexerItem, res esutil.BulkIndexerResponseItem, err error) {
			if err != nil {
				log.Printf("error")
			} else {
				log.Printf("error in response body")
			}
			// **** Reading from item.Body returns an empty string ****
			bodyBytes, err := ioutil.ReadAll(item.Body)
			bodyString := string(bodyBytes) // bodyString is ""
			log.Printf("Body: %s", bodyString)
		},
	},
)

Can I get some help fixing this bug? I think the body needs to be saved when it's read, and then the item's body io.Reader just needs to be reinstantiated with that content before being used as an argument to OnFailure. Happy to submit a bug-fix PR if that would be faster. Thanks!

closed time in 5 days

l1redd

issue comment elastic/go-elasticsearch

Memory leak when losing connection with server

I've fixed the memory leak and merged the patch into master and 7.x branches.

This was a little hard to reproduce for me, since the OS reported a stable value with tools such as ps -axm -o %mem,rss,comm | grep 'go$', and I had to observe the memory growing by using the runtime Go package.

For the record, this is the full test script I've used:

//+build ignore

package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"math"
	"net/http"
	"os"
	"runtime"
	"strings"
	"time"

	_ "net/http/pprof"

	"github.com/elastic/go-elasticsearch/v8"
	"github.com/elastic/go-elasticsearch/v8/esapi"
	"github.com/elastic/go-elasticsearch/v8/estransport"
)

var (
	_ = fmt.Print
	_ = context.WithTimeout
	_ = math.Exp
	_ = strings.NewReader
	_ = http.DefaultClient
)

func main() {
	log.SetFlags(0)

	go func() { log.Fatalln(http.ListenAndServe("localhost:6060", nil)) }()

	count := 10000

	cfg := elasticsearch.Config{
		Addresses: []string{"http://localhost:8000"},
		Logger:    &estransport.ColorLogger{Output: os.Stdout},
		// RetryBackoff:  func(i int) time.Duration { return time.Duration(0) },
	}

	es, err := elasticsearch.NewClient(cfg)
	if err != nil {
		log.Fatalf("Error creating the client: %s", err)
	}
	log.Println("Client ready with URLs:", es.Transport.(*estransport.Client).URLs())

	for i := 0; i < count; i++ {
		var (
			res *esapi.Response
			err error
		)

		res, err = es.Info()

		if err != nil {
			log.Printf("Error getting response: [%T]: %s", err, err)
		}

		io.Copy(ioutil.Discard, res.Body)
		res.Body.Close()

		if i%10 == 0 {
			printMemUsage()
		}
	}

	log.Println("Waiting...")
	time.Sleep(time.Hour)
}

func printMemUsage() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("   m.Sys: %v MB\n", m.Sys/1024/1024)
}

This is the Nginx configuration used:

# docker run --name nginx --volume $PWD/nginx-proxy.conf:/etc/nginx/nginx.conf --publish 8000:8000 --network elasticsearch --rm nginx

user nginx;

worker_processes auto;
worker_rlimit_nofile 10240;
pid nginx.pid;

error_log off;

events {
  worker_connections  10240;
  accept_mutex        off;
  multi_accept        off;
}

http {
  upstream elasticsearch {
    server es1:9200;
  }

  server {
    listen 8000 default_server;

    location / {
      proxy_pass http://elasticsearch;
    }

    location /status {
      stub_status;
    }
  }
}

cuong58

comment created time in 5 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha d51b862ad36af95df6391833773a7b72c42d939b

Transport: Fix memory leak when retrying 5xx responses To prevent a memory leak when a request with a retryable status code is executed (502, etc), drain and close the body before the request is retried. Fixes #159 (cherry picked from commit c50a1782abe8b03b7ff2db2cd859ac48508c7aac)

view details

push time in 5 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha c50a1782abe8b03b7ff2db2cd859ac48508c7aac

Transport: Fix memory leak when retrying 5xx responses To prevent a memory leak when a request with a retryable status code is executed (502, etc), drain and close the body before the request is retried. Fixes #159

view details

push time in 5 days

issue closed elastic/go-elasticsearch

Memory leak when losing connection with server

Issue

Our service serves 40-50 requests/sec. When it loses the connection with the ES server for 15 minutes, the amount of occupied memory increases to more than 300 MB. I have found where the memory is leaking: https://github.com/elastic/go-elasticsearch/blob/698f9f3b9fe3cac9c648569e095e5ad8d2c671c3/estransport/estransport.go#L258 When the connection with the server is lost, err == nil, but res.StatusCode == 502 and res.Body contains the error message data. If max retries > 1, I think that res.Body.Close() should be called before the next retry.

How to reproduce this bug

  1. Use Nginx to route all requests from http://0.0.0.0:9200 to http://0.0.0.0:9201 that is not available.
  2. Create an ES client and send 10,000 search requests to http://0.0.0.0:9200.
  3. Check memory occupied by the client process.

closed time in 5 days

cuong58

issue closed elastic/go-elasticsearch

Writing query for multiple conditions

query := map[string]interface{}{
	"query": map[string]interface{}{
		"bool": map[string]interface{}{
			"must": []interface{}{
				map[string]interface{}{
					"term": map[string]interface{}{
						"user_status": map[string]interface{}{
							"value": 1,
						},
					},
				},
				map[string]interface{}{
					"term": map[string]interface{}{
						"user_id": map[string]interface{}{
							"value": 453027897653854208,
						},
					},
				},
			},
		},
	},
}

closed time in 9 days

readingtfsc

issue closed elastic/go-elasticsearch

how to disable host name validation

When I use go-elasticsearch to connect to a remote Elasticsearch cluster configured with TLS, I run into a connection problem:

ERROR: Unable to get response: x509: cannot validate certificate for 10.113.215.206 because it doesn't contain any IP SANs

Can I disable X509 host name validation?

closed time in 9 days

ZhiXingHeYiApple

push event elastic/go-elasticsearch

Karel Minarik

commit sha 6ca7762232c7802d364c111507f871841cd42aaa

Generator: Docs: Update the configuration and rules

view details

Karel Minarik

commit sha 2c4ce25bd45cea1f119dc1951374eed61a0638c3

Examples: Update the generated documentation examples

view details

push time in 10 days

pull request comment elastic/go-elasticsearch

Reinstate item.Body after it is consumed

Hi @l1redd, the patch looks good. I was concerned about using the pointer at first, but it doesn't appear to impact the allocations when I run the benchmarks locally. I see the CLA checker complains; can you please sign the CLA at https://www.elastic.co/contributor-agreement?

l1redd

comment created time in 10 days

issue comment elastic/go-elasticsearch

Use context to pass value to bulk importer callback

You've mentioned Redis and deduplication, which sounds interesting. Could you provide more sample code for what you're doing in there? It would also help me to model the situation when I look into the implementation.

mbudge

comment created time in 10 days

Pull request review comment elastic/elasticsearch-js

Added option to run the benchmarks sequentially or concurrently

 class Action {
     assert(opts.action, 'Missing action name')
     assert(opts.action, 'Missing action category')
     assert(opts.measure, 'Missing action measure function')
+    assert(opts.run, 'Missing actionrun type (sequential or concurrent)')
    assert(opts.run, 'Missing action run type (sequential or concurrent)')
delvedor

comment created time in 11 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 26ee26eb54074c5e521eb7715ad4eac2b1ac6454

API: Update the APIs for Elasticsearch 7.9 (7dda9934f90)

view details

push time in 12 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha c07108ae1aaafec37b79fc2fc2241ef799e6459f

API: Update the APIs for Elasticsearch 8.x (3b61ec1fe27)

view details

push time in 12 days

issue comment elastic/go-elasticsearch

Use context to pass value to bulk importer callback

Hi, right, I had use cases like this in mind when I added the ctx argument to the signatures. I guess you're saying the context is not being correctly propagated from the Add() method to the callbacks?

mbudge

comment created time in 12 days

release elastic/go-elasticsearch

v7.8.0

released time in 17 days

created tag elastic/go-elasticsearch

tag v7.8.0

The official Go client for Elasticsearch

created time in 17 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 66f9ff5f7952002fd06a4c3c31f553ddcc931243

API: Update the APIs for Elasticsearch 7.8 (75731469564)

view details

push time in 17 days

issue comment elastic/go-elasticsearch

Find() ret Root field value Pointer maybe nil

No worries at all :)

micheal0929

comment created time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha eacc993fb606983dc5a3929c2438a06ee0456f5e

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha 2cd58089505efdd3093a43921dd8d69f0b9fde07

Generator: Tests: Fix handling of int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430) (cherry picked from commit 2e880d21505a619512cc72409ec1981b030e7bb8)

view details

Karel Minarik

commit sha 77ff7b9ccefd17f19d5e5d29889d67ec91989682

Generator: Tests: Improve the handling of unsupported features (cherry picked from commit 3e398adc67a25ed83760275ecb06d5242d0e3246)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha ecca0d5c32e0c1d870eaf44949c7bb2ee13d2dd6

Generator: Tests: Improve the handling of unsupported features (cherry picked from commit 3e398adc67a25ed83760275ecb06d5242d0e3246)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 3e398adc67a25ed83760275ecb06d5242d0e3246

Generator: Tests: Improve the handling of unsupported features

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 698518f82d6b705fb7374d5dbab0a310204abc48

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

Karel Minarik

commit sha ab633e55765e9b4ab77f10be3a021a00a6d0b3f2

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha b7426574cdd513f3231212797e2d9625fdf57102

Generator: Tests: Fix handling of int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430) (cherry picked from commit 2e880d21505a619512cc72409ec1981b030e7bb8)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 08b93f51354abc151429653d74145b339bb94866

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

Karel Minarik

commit sha be115b4234831e47490d657ba003b52e135406a9

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha 6b1c638e4c3d8b0e51f21f6d65421a0d212eabfb

Generator: Tests: Fix handling of int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430) (cherry picked from commit 2e880d21505a619512cc72409ec1981b030e7bb8)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 7d49bbba198e2af3efedda0430922cdbcd2bb183

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

Karel Minarik

commit sha 738a067dcba12d37b29eeb592370b4e353667361

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha b68b96c85f9b3521ce52695397c71211b9758497

Generator: Tests: Fix handling of int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430) (cherry picked from commit 2e880d21505a619512cc72409ec1981b030e7bb8)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 56054e53e45b966b0975ead7f8a3c13eb17a96a7

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

Karel Minarik

commit sha 18d647209d6610b9a0f35f945c85d62adec0bcf7

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha 4e1f80afb9f9668fff5146bcea1047c433363d05

Generator: Tests: Fix handling of int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430) (cherry picked from commit 2e880d21505a619512cc72409ec1981b030e7bb8)

view details

push time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha ff107244e3cf595f262c3a3db77ad685de70680a

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

push time in 18 days

issue comment elastic/go-elasticsearch

[BUG] BulkIndexerItem body io.Reader used before OnFailure is called

You're right, the item body is consumed when it's serialized and sent over the wire. To access it from a callback, it indeed needs to be duplicated and reinstated, in a similar manner to what happens in https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/estransport/estransport.go#L248 or https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/estransport/estransport.go#L264

The implementation here needs to be careful to not duplicate the body when the callbacks are not defined, in order to not allocate more than is needed. I'd be happy to assist with a PR, and I can also look into it myself, after I clean up some other things going on right now.

l1redd

comment created time in 18 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 9931acfbfa6f13ecddb393522ef6cae1ffda453c

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha 2e880d21505a619512cc72409ec1981b030e7bb8

Generator: Tests: Fix common setup and handling int/float (cherry picked from commit 76170c76711a3b0d17c5c3fd1ded945f16dc9430)

view details

Karel Minarik

commit sha a252e6d1d127a48170c997fe39368a8a45229fb2

CI: Update the Jenkins configuration Use the `STACK_VERSION` environment variable defined in the `master` branch.

view details

push time in 18 days

issue comment elastic/go-elasticsearch

how to disable host name validation

Hello, to customize the transport settings, see the example in the README, directly after the “To configure other HTTP settings (...)” sentence.

An executable example is available here:

https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/_examples/configuration.go#L27-L38

As you are no doubt aware, disabling the validation is not the best option, and it's best to provide a certificate to the client — it exposes the CACert configuration option to make this easy.

ZhiXingHeYiApple

comment created time in 18 days

issue comment elastic/go-elasticsearch

Memory leak when losing connection with server

Hi, sorry for the delay here! So I understand that the error happens when a proxy in the middle returns a 502 error, is that right? If the client is retrying in this situation, res.Body should be closed (and drained) after the following code:

https://github.com/elastic/go-elasticsearch/blob/68f14edba6d35cbf14652c26f89f2e99b5e4cb6c/estransport/estransport.go#L307-L313

Is that right? I'll have a look into a reproduction and a fix.

cuong58

comment created time in 19 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 1062bee2e4011ce7e5ef37f980b84b7ae301aaec

API: Update the APIs for Elasticsearch 7.9 (2edcd064fe9)

view details

Karel Minarik

commit sha ff6b9cddcf6343e4f23236500074a57e11e04029

Update version to 7.9-SNAPSHOT

view details

push time in 19 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 303c514d26a61158339bee74d52f69ace9175452

API: Update the APIs for Elasticsearch 7.8 (cd7638416b4)

view details

push time in 19 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 68f14edba6d35cbf14652c26f89f2e99b5e4cb6c

API: Update the APIs for Elasticsearch 8.x (e926994d875)

view details

push time in 19 days

issue comment elastic/go-elasticsearch

Find() ret Root field value Pointer maybe nil

Hello, are you sure you've opened this issue in a correct repository?

micheal0929

comment created time in 19 days

started omnibrain/svguitar

started time in 21 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha a8e398c27161168cd16b8aec9e8c4dfb05df9e7b

Examples: Update the generated documentation examples

view details

push time in 25 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 9e5ab89a1a125a589ec3743b72ed4fe22a379539

Generator: Tests: Update the list of skipped tests

view details

Karel Minarik

commit sha 76170c76711a3b0d17c5c3fd1ded945f16dc9430

Generator: Tests: Fix common setup and handling int/float

view details

push time in 25 days

push event elastic/go-elasticsearch

Karel Minarik

commit sha 698f9f3b9fe3cac9c648569e095e5ad8d2c671c3

API: Update the APIs for Elasticsearch 8.x (99aeed6acf4)

view details

push time in 25 days

issue comment elastic/go-elasticsearch

BulkIndexer cannot insert data to Elastic Search < 7.0.0

I've released https://github.com/elastic/go-elasticsearch/releases/tag/v6.8.10 with the patch.

Llewellin

comment created time in a month

created tag elastic/go-elasticsearch

tag v6.8.10

The official Go client for Elasticsearch

created time in a month

push event elastic/go-elasticsearch

Karel Minarik

commit sha 996ee1fe69ab40ad65795e48d125b6b4395b6ec3

Release 6.8.10

view details

Karel Minarik

commit sha b4979d4be640f5efa21437d3cf13675af883efb7

Update version to 6.8.11-SNAPSHOT

view details

push time in a month

issue comment elastic/go-elasticsearch

BulkIndexer cannot insert data to Elastic Search < 7.0.0

Hello, for Elasticsearch 6.x, you have to use the corresponding version of the client, i.e. 6.x. The support for the document type was added in https://github.com/elastic/go-elasticsearch/commit/c022ac3fb26cdd903e9afe99a5562c41f0e83b64. Can you try with the 6.x branch?

It appears the patch has not been released in a Git tag yet; I'll work on that.

Llewellin

comment created time in a month

issue comment elastic/go-elasticsearch

How to simple writing??

I understand what you're trying to achieve. The package accepts an io.Reader as the query definition, so it's up to you to provide the query: it could be a serialization of map[string]interface{}, or it could be simply a strings.Reader. Have a look into _examples/encoding and _examples/xkcdsearch for samples.

In Elasticsearch terms, you probably want the Bool query — if you look into the documentation, and switch the language to Go, you'll see an example.

readingtfsc

comment created time in a month

issue comment elastic/go-elasticsearch

How to simple writing??

Hello, please provide more information about what you are trying to achieve, what is not working, and so on.

readingtfsc

comment created time in a month

issue comment elastic/go-elasticsearch

Error while adding go-elasticsearch package

Still having the issue :/

Can you elaborate on what kind of issue you're having?

pallavibatra-cpi

comment created time in a month

issue comment elastic/go-elasticsearch

Dep ensure does not work

Thanks for the kind words, @yukels!

I'm aware that there must be a lot of projects which still use the "legacy" dependency management tools, but since the package is relatively new, it uses the current standard "modules" approach. I'm glad the vendor workaround works!

yukels

comment created time in a month

issue comment elastic/go-elasticsearch

Dep ensure does not work

Hello, the project uses Go modules; neither dep nor Govendor, the legacy solutions for Go dependency management, is directly supported. However, with some clever acrobatics in the vendor directory, and by cloning a specific version, it might be possible to work around the issue — see my comment here: https://github.com/elastic/go-elasticsearch/issues/145#issuecomment-615118305.

yukels

comment created time in a month

issue comment elastic/elasticsearch

[Transform] add throttling based on configuration parameter

Agreed that null values are hard to implement (and semantically somewhat empty), and that -1, especially for numerical values, makes more sense.

hendrikmuhs

comment created time in a month

push event elastic/elasticsearch-py

Seth Michael Larson

commit sha 8b3447fd7e5d84d5070236e3322361ce31c14a51

Split CI into Jenkins and GitHub Actions

view details

Seth Michael Larson

commit sha cd85aa2ba31b2745ffa71d2edf6902400639bafc

Update changelog with 7.7.1 release

view details

Seth Michael Larson

commit sha db2a657a8b29ed973b2dd8a18fab40201b9da723

Add async helpers

view details

Seth Michael Larson

commit sha a2af5cc8742d06a0e8d01536de59b9fcb3ce8033

Initial commit of benchmarks

view details

Seth Michael Larson

commit sha ba91d3ead4bc07631f31849da0af06e4ae1252f4

Make benchmarks work with existing docker-compose setup

view details

Seth Michael Larson

commit sha 7f37663ef622319255f74a0fa1f6cc245278b94b

Refactor 'Operation' to 'Action', fix scripts

view details

Seth Michael Larson

commit sha b9788af0a9e22e9449a6b9ee574062ce5fd1de55

Add Jenkins config for running benchmarks

view details

Karel Minarik

commit sha 58187cd3554df1130249b8c63270c4ea51a5c937

Fix Jenkins config for running benchmarks * Fix the CLIENT_* environment variables * Fix `gcloud` configuration * Add Vault integration * Rename the Jenkins job

view details

Seth Michael Larson

commit sha 434b538fc5c5fb5a02643ebb071cf8f502dbf782

Update run-benchmarks.sh

view details

Karel Minarik

commit sha b841eb402d6214f4bf74a459a8aabcef0d775a00

Add event.outcome to benchmarks reporting

view details

push time in a month

push event elastic/elasticsearch-ruby

Karel Minarik

commit sha 85c5cbfa33bec7fcd92b9a0bfbea99946cdb220e

Benchmarks: Add the client benchmark support This patch adds the runner for end-to-end client benchmarks. Usage: $ BUILD_ID=foo123 \ TARGET_SERVICE_TYPE=elasticsearch \ TARGET_SERVICE_NAME=elasticsearch \ TARGET_SERVICE_VERSION=8.0.0-SNAPSHOT \ TARGET_SERVICE_GIT_COMMIT=8e8ce967757929b1f71e6a9ce4cbfbf60407867b \ TARGET_SERVICE_OS_FAMILY=docker \ CLIENT_BRANCH=master \ CLIENT_COMMIT=$(git rev-parse --short HEAD) \ CLIENT_BENCHMARK_ENVIRONMENT=production \ DATA_SOURCE=/path/to/data/ \ ELASTICSEARCH_TARGET_URL=http://localhost:9200 \ ELASTICSEARCH_REPORT_URL=http://localhost:9200 \ FILTER=info \ bundle exec ruby run.rb

view details

Karel Minarik

commit sha b161c39cfdd2483d9cc8284c1f64d6a032295e2b

CI: Add Jenkins configuration for running benchmarks

view details

push time in a month

push event elastic/go-elasticsearch

Karel Minarik

commit sha d4ec9cd13b5bc8612e82a7599e80dfde711e20fc

WIP > Add event.outcome to the reporting output

view details

push time in a month

push event elastic/go-elasticsearch

Karel Minarik

commit sha ac6fcf9bcfd0d8e0899482cbd12d07de079bbca2

WIP > Optimize the reading of document body in index/bulk operations Do not read the files during the measured function execution, but read them in advance, in main.go.

view details

push time in a month

push event elastic/go-elasticsearch

Karel Minarik

commit sha 5f3f5f0a4eaddf0ac78bb8ec0b068711cc4b539f

CI: Fix the run-benchmarks.sh script Add `git fetch && git reset` commands to auto-update the `elasticsearch-clients-benchmarks` repository.

view details

push time in a month

push event elastic/go-elasticsearch

Karel Minarik

commit sha aac805c02b98667df39ff120eabf272cf9146e6d

CI: Fix the run-benchmarks.sh script Add `git fetch && git reset` commands to auto-update the `elasticsearch-clients-benchmarks` repository.

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 7b5bb5dc4959deace9a81ddfa8ab876862b5bc70

CI: Fix the run-benchmarks.sh script Add `git fetch && git reset` commands to auto-update the `elasticsearch-clients-benchmarks` repository.

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 9b31b8dfa96091bbcc223d8d5bc9e3759e6d0732

WIP > Increase the number of repetitions to match Python

view details

push time in a month

push eventelastic/elasticsearch-py

Seth Michael Larson

commit sha 04fee08b03c06deb56dd6a438418fd06061a0a8d

Update client examples for 2020-05-26

view details

Seth Michael Larson

commit sha 242adb0a397bc1dc55ddc334ddf10169635f1525

Add Index v2 and Voting Config APIs

view details

Seth Michael Larson

commit sha 4a65a541abae538d7bb3c80f7bfca1b2517af7ce

Add 7.7.0 release to master changelog

view details

Seth Michael Larson

commit sha f0ebc12717702e84bdff1ca491cc40681a44559d

Switch to Pytest as default runner

view details

Pat Lindley

commit sha 66e876d63086b28323f4c0f64da7799a59609a9e

Replace sys.exitfunc with atexit

view details

Seth Michael Larson

commit sha 3ca05b9fbe73fcfb1c54cbd250c44c3fc4f014fa

Add 7.8 job to Jenkins

view details

Seth Michael Larson

commit sha 45afaf349284c0056544fc72f2eab3fedc134dcd

Add support for 'allowed_warnings' feature

view details

Seth Michael Larson

commit sha c14c75ff72fa3cadee5ea10c32d46e63a8917f12

7.8 branch on the Jenkins job

view details

Seth Michael Larson

commit sha b75b89595975cfa7f2541e3fdd2af279037dc5a2

Add AIOHttpConnection

view details

Seth Michael Larson

commit sha 2435c90e4c01f18b8cb64796d4959b2b8a3864ff

Add AsyncTransport

view details

Seth Michael Larson

commit sha 76e3a39764efffad53cade8c312e74d93c91f4a8

Update API generator for async

view details

Seth Michael Larson

commit sha d49bf44654dd26367dd9357f3814f49d66eca6e5

Generate async API

view details

Seth Michael Larson

commit sha f0db9a4ab1dea65ecc1ad6768851193049973e7f

Add test suite for async API

view details

Seth Michael Larson

commit sha cb18a0a703a9a44b607066b4e0cb2add424d37e2

Use stacklevel for DeprecationWarnings

view details

Seth Michael Larson

commit sha b187c7a1e5fa0fa87df3196cdeb378b5520c9ca2

Raise warnings when receiving the 'Warning' header for AIOHttpConnection

view details

Seth Michael Larson

commit sha f7ff7d4a306c1f7d9c9dc1b2027ae23beda47b33

Initial commit of benchmarks

view details

Seth Michael Larson

commit sha a73293275627ae8fff29cfc7ff1031634967960f

Make benchmarks work with existing docker-compose setup

view details

Seth Michael Larson

commit sha ada6db9c0a10d239d9494a8463449d22795335af

Refactor 'Operation' to 'Action', fix scripts

view details

Seth Michael Larson

commit sha 8d80bb809d11701c2eb378cb35fb526900cc2599

Add Jenkins config for running benchmarks

view details

Karel Minarik

commit sha 7fa757cdf6ce0b94c59d2be82c21569cd37673f1

Fix Jenkins config for running benchmarks * Fix the CLIENT_* environment variables * Fix `gcloud` configuration * Add Vault integration * Rename the Jenkins job

view details

push time in a month

push eventelastic/elasticsearch-ruby

Karel Minarik

commit sha 223db3082171af62448bb994ff04d1be07a40544

Benchmarks: Add the client benchmark support This patch adds the runner for end-to-end client benchmarks. Usage: $ BUILD_ID=foo123 \ TARGET_SERVICE_TYPE=elasticsearch \ TARGET_SERVICE_NAME=elasticsearch \ TARGET_SERVICE_VERSION=8.0.0-SNAPSHOT \ TARGET_SERVICE_GIT_COMMIT=8e8ce967757929b1f71e6a9ce4cbfbf60407867b \ TARGET_SERVICE_OS_FAMILY=docker \ CLIENT_BRANCH=master \ CLIENT_COMMIT=$(git rev-parse --short HEAD) \ CLIENT_BENCHMARK_ENVIRONMENT=production \ DATA_SOURCE=/path/to/data/ \ ELASTICSEARCH_TARGET_URL=http://localhost:9200 \ ELASTICSEARCH_REPORT_URL=http://localhost:9200 \ FILTER=info \ bundle exec ruby run.rb

view details

Karel Minarik

commit sha b0f157fbe6ee1209128a9d884ecf6899cfe004ee

CI: Add Jenkins configuration for running benchmarks

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 5c47e50edb2e5ae95eded183e5b5bcf7acadb567

WIP > Increase the number of repetitions to match Python

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha b24e0d29a1738dfc1515ee4ceab036c756182d42

API: Update the APIs for Elasticsearch 8.x (8541ef4f758)

view details

Karel Minarik

commit sha 53071fb3312d441d076522a7a7f4c8fc4020ac6a

API: Update the APIs for Elasticsearch 8.x (8a086ba05de)

view details

Karel Minarik

commit sha b05f73fe0dcf3e54a12befe5fa0796d2bf204c28

CI: Add the 7.8 configuration for Jenkins

view details

Karel Minarik

commit sha 91c05e0b824dc9b641298f2627b54b7a99b91edd

WIP > First sketch of the benchmarking runner

view details

Karel Minarik

commit sha 374393296af34e608cfc0a0757ee6908e2b70e97

WIP > Store a document for each iteration, not for the whole run

view details

Karel Minarik

commit sha c16766a84b6d4f5b00dab5a789706eb3bc486781

WIP > Use esutil.BulkIndexer for storing the statistics

view details

Karel Minarik

commit sha cefadc45eb2b3641132ced038f87c960f239f0a5

WIP > Add start time to Stats and use it as @timestamp

view details

Karel Minarik

commit sha f576fa7c0a0981fa19e790ef671e85ddb73828d1

WIP > Add configuration for ELASTICSEARCH_TARGET_URL and ELASTICSEARCH_REPORT_URL

view details

Karel Minarik

commit sha 1b94fa67fe3d72e06a365be4ae1f5d601d7c9652

WIP > Update the benchmark runner and executable

view details

Karel Minarik

commit sha ceb28f361dd725969a0953a17f2710812493a59a

WIP > Refactor the runner setup and executable output

view details

Karel Minarik

commit sha c03d4361184d1bf5769d7daa778dc28d99a1b121

WIP > Add information about OS, Git and runtime to the reported data

view details

Karel Minarik

commit sha f2c69e43463804c0d26471b98ca8d29cb54bf4f2

WIP > Add support for filtering the operations to run

view details

Karel Minarik

commit sha 4f651e8516519556bdce30a3758c2fd3d411c71c

WIP > Improve the debugging output when errors occur

view details

Karel Minarik

commit sha 2787f1f3d6a144249554510d4838358b266d2974

WIP > Add the "Get" operation to the list of benchmarks

view details

Karel Minarik

commit sha 7700bb7c46d8883ddf5630019760d107f97fa52a

WIP > Add support for passing repetition number to RunnerFunc() and the "Index()" benchmark

view details

Karel Minarik

commit sha 9c3cc25666a83a6cbbe36048be485c03601ced30

Dockerfile: Vendor benchmark dependencies during build

view details

Karel Minarik

commit sha 74cef9ea07515bae8eab5cfe5a9aade5708f947c

WIP > Update the schema for the event document

view details

Karel Minarik

commit sha e8b81851d9e7611ca60a71effe6b935f704b15f0

WIP > Read information about target and runner from environment and pass it to the runner

view details

Karel Minarik

commit sha f8348f9fc4a97e9235a68a97212290e6130f9aa9

WIP > Added "category" and "environment" to benchmark metadata and removed the overall run duration

view details

Karel Minarik

commit sha 0d99e1959521d7dac87116de2ef62c26c5ee8e04

WIP > Add consuming of res.Body in the benchmark operations

view details

push time in a month

push eventelastic/elasticsearch-ruby

Fernando Briano

commit sha fe1c97b66200c7bbce3e40087a965886efb2c246

[DOCS] Remove outdated example doc

view details

Fernando Briano

commit sha 0c1a3e6331ab0e818defbb78307b2c20b7b0ae21

[API] Fix bug for skipping versions

view details

Fernando Briano

commit sha 2b30923ff9a015741d7280c0574c4ccdc39dcfae

[XPACK] Generate indices endpoints

view details

Fernando Briano

commit sha 348efa988f409076dd266ed2a5a242c422f035f4

[XPACK] Adds searchable snapshots endpoints

view details

Fernando Briano

commit sha 68bed524198cdb92f216c3e960810dbef1b3317f

[DOCS] Update Changelog and Release Notes for 7.7

view details

Fernando Briano

commit sha 0ed38663f13187ad000c41f37874ad05756838a9

[CI] Remove 7.7 branch from Jenkins

view details

Fernando Briano

commit sha e8d7017baaad91b3e7e3152b13a1be42a481f055

[API] Renames indices.get_data_stream from indices.get_data_streams

view details

Fernando Briano

commit sha c51bbf1969d1837ce87439435f1c39da0767aa86

[CI] Test 7.8 branch on Jenkins

view details

Fernando Briano

commit sha 81590c4e9e0829acf3ab85dff569fa5b24c5e040

[API] Adds simulate template endpoint

view details

Fernando Briano

commit sha eb758548cbca49a18a0ce51d64c53df945d39082

[CLIENT] Adds Typhoeus 1.4, now compatible with Faraday 1.0

view details

Fernando Briano

commit sha cf3111fc355db1b88b0025856ea76c7e3971469b

[DOCS] Adds Typhoeus back with note for using 1.4

view details

Fernando Briano

commit sha ec47d069b0d1f1e52269f7cb116a1f008e58bae0

Whitespace cleanup

view details

Karel Minarik

commit sha 73cbf9bc0d8b19a6aac3e173e9f2324476c20f8b

Fix the ignoring of files for "docker build" The .dockerignore file has been ineffective in the `.ci` folder — it needs to be present in the top-level folder, which is the build context. Also, some folders to be ignored have been added.

view details

Karel Minarik

commit sha 08d9a1717aa81090bec78ff073d5228a6635ccac

Benchmarks: Add the client benchmark support This patch adds the runner for end-to-end client benchmarks. Usage: $ BUILD_ID=foo123 \ TARGET_SERVICE_TYPE=elasticsearch \ TARGET_SERVICE_NAME=elasticsearch \ TARGET_SERVICE_VERSION=8.0.0-SNAPSHOT \ TARGET_SERVICE_GIT_COMMIT=8e8ce967757929b1f71e6a9ce4cbfbf60407867b \ TARGET_SERVICE_OS_FAMILY=docker \ CLIENT_BRANCH=master \ CLIENT_COMMIT=$(git rev-parse --short HEAD) \ CLIENT_BENCHMARK_ENVIRONMENT=production \ DATA_SOURCE=/path/to/data/ \ ELASTICSEARCH_TARGET_URL=http://localhost:9200 \ ELASTICSEARCH_REPORT_URL=http://localhost:9200 \ FILTER=info \ bundle exec ruby run.rb

view details

Karel Minarik

commit sha 649ae1b421c512aa366b02f940067a2d4dc8de64

CI: Add Jenkins configuration for running benchmarks

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha a3beefb9eccfc16d458cafd6376efa71b582af15

WIP > Optimize benchmarking operations * Remove defer() for closing response bodies * Ensure response body is drained * Close sample data file after reading

view details

push time in a month

issue commentelastic/go-elasticsearch

slice scroll parallel support

The package provides access to the Scroll API (docs), so you can pass the slice parameter in the request body, as outlined in the example you've linked to.

As of now, there's no high-level helper for "scanning" an index, similar to the esutil.BulkIndexer helper, but it is planned for future versions.

hackerwin7

comment created time in a month

push eventelastic/elasticsearch-py

Karel Minarik

commit sha ce38f341dd02a3ef5dbef300c028dc15a71ddf1a

Fix Jenkins config for running benchmarks * Fix the CLIENT_* environment variables * Fix `gcloud` configuration * Add Vault integration * Rename the Jenkins job

view details

push time in a month

push eventelastic/elasticsearch-py

Karel Minarik

commit sha 7b39f682d5fb0053e47062b30964ea8354b99f9c

Fix Jenkins config for running benchmarks

view details

push time in a month

issue commentelastic/go-elasticsearch

Facing error while trying to extract package

Seems like you're using an older version of Go; the Transport.Clone method is available since Go 1.13, see https://github.com/elastic/go-elasticsearch/issues/134

nikhilnfer

comment created time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha b05f73fe0dcf3e54a12befe5fa0796d2bf204c28

CI: Add the 7.8 configuration for Jenkins

view details

push time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 53071fb3312d441d076522a7a7f4c8fc4020ac6a

API: Update the APIs for Elasticsearch 8.x (8a086ba05de)

view details

push time in a month

issue commentelastic/go-elasticsearch

Security Issue with %q?

In general, taking in the input and sending it in the multi_match query is safe. I've mentioned a "malicious payload" because I can imagine somebody could come up with some sort of "unicode bomb" — unparseable content, monstrously big content, etc. Some lightweight sanitization, like limiting the allowed length of the query (a regular query won't have 1,000,000 characters) or limiting the Unicode range (which is hard to do, though), might make sense. But a fulltext query is not susceptible to a regular SQL injection attack in the "little bobby tables" sense.

How would you expose a search API that takes some words as query parameter from a user?

The example you've found is actually a pretty solid way of solving that: just feed it to a match or multi_match query. Using strconv.Quote() might improve it a bit.

moon-bits

comment created time in a month

issue commentelastic/go-elasticsearch

Security Issue with %q?

There is no danger of "query injection" with the multi_match query, e.g. it doesn't expand wildcards. Of course, user-provided input always needs to be treated carefully — I imagine it would be possible to send a malformed payload there.

Note that things are different with the query_string query, which can be quite easily used for stressing the cluster. Things get really dangerous if people could pass their own scripts, but that's not something possible out-of-the-box, unless you allow them to do it.

moon-bits

comment created time in a month

push eventelastic/go-elasticsearch

Karel Minarik

commit sha b24e0d29a1738dfc1515ee4ceab036c756182d42

API: Update the APIs for Elasticsearch 8.x (8541ef4f758)

view details

push time in 2 months

issue commentelastic/go-elasticsearch

Getting error "read on closed response body" when trying to delete doc that does not exist

Hello, sorry for not getting back to you sooner.

When you discard the body and close it, it is expected that the subsequent call to res.String() fails, because it actually tries to read a closed body. I think it's perhaps better to add the discard and close into a defer statement in your case?

eslam-mahmoud

comment created time in 2 months

issue closedelastic/go-elasticsearch

Scroll example

Dear Everyone,

I'm looking for an example Go implementation of scroll pagination.

Thank you

closed time in 2 months

tabvn

issue commentelastic/go-elasticsearch

Scroll example

Hello, thanks for the example! There are plans to add a helper for scrolling, similar to esutil.BulkIndexer, in the future versions.

tabvn

comment created time in 2 months

issue closedelastic/go-elasticsearch

Support for voting configuration exclusions API

I'm using go-elasticsearch 7.x branch.

I need to perform both POST and DELETE requests using /_cluster/voting_config_exclusions API endpoint.

POST /_cluster/voting_config_exclusions/nodeId1

DELETE /_cluster/voting_config_exclusions

But I didn't find any implementation of voting_configuration_exclusions API in Cluster APIs:

type Cluster struct {
	AllocationExplain       ClusterAllocationExplain
	DeleteComponentTemplate ClusterDeleteComponentTemplate
	ExistsComponentTemplate ClusterExistsComponentTemplate
	GetComponentTemplate    ClusterGetComponentTemplate
	GetSettings             ClusterGetSettings
	Health                  ClusterHealth
	PendingTasks            ClusterPendingTasks
	PutComponentTemplate    ClusterPutComponentTemplate
	PutSettings             ClusterPutSettings
	RemoteInfo              ClusterRemoteInfo
	Reroute                 ClusterReroute
	State                   ClusterState
	Stats                   ClusterStats
}

Can you guide a way out by telling how can I perform these requests using go-elasticsearch?

closed time in 2 months

kamolhasan

issue closedelastic/go-elasticsearch

Update example

Hi,

Using your package, I was able to insert a document (with two fields: one unique, one that can be anything).

Now I would like to update these fields — how can I do that? On the internet I saw examples using a script, but I don't really get it.

my struct:

type User struct {
	Id   string `json:"id,omitempty"`
	Pseudo string `json:"pseudo,omitempty"`
	FullName string `json:"full_name,omitempty"`
}

As for now, I have the following code:

const indexUsers = "users"

type ESUtils struct {
    client *elasticsearch.Client
}

func (es *ESUtils) UpdateUsername(userId, username string) error {

	var buf bytes.Buffer
	queryMap := map[string]interface{}{
		// ?????????????
	}
	if err := json.NewEncoder(&buf).Encode(queryMap); err != nil {
		log.Fatalf("Error encoding query: %s", err)
	}
	// Perform the update request.
	res, err := es.client.Update(
		indexUsers,
		userId,
		&buf,
		es.client.Update.WithPretty())
	if err != nil {
		log.Fatalf("Error getting response: %s", err)
	}
	defer res.Body.Close()

	if res.IsError() {
		log.Printf("[%s] Error indexing document ID=%s", res.Status(), userId)
	} else {
		// Deserialize the response into a map.
		var r map[string]interface{}
		if err := json.NewDecoder(res.Body).Decode(&r); err != nil {
			log.Printf("Error parsing the response body: %s", err)
		} else {
			// Print the response status and indexed document version.
			log.Printf("[%s] %s; version=%d", res.Status(), r["result"], int(r["_version"].(float64)))
		}
	}

	return nil
}

Thanks for any help

closed time in 2 months

Emixam23

issue commentelastic/go-elasticsearch

bulk index: disk increases, documents don't

(...) see disk mb going up by about 190mb, and document count going up by 1 (...)

Is it possible that you're setting the document ID to the same value, so you're basically overwriting the same document over and over?

AndrewSB

comment created time in 2 months

issue commentelastic/go-elasticsearch

bulk index: disk increases, documents don't

Hello — from reading it, I don't see anything wrong in the code you've posted. It is indeed strange that OnFailure() is not called for the failed items.

Can you please provide information about the Elasticsearch version? Also, how is the cluster deployed (local installation, custom installation, hosted, ...)?

In order to debug the failures, can you enable logging in the client? The following example shows how:

https://github.com/elastic/go-elasticsearch/blob/bbd6b17000b471214c7fbc7ee79e1a6299fe7797/_examples/logging/default.go#L52-L58

For millions of items, it's going to be a lot of output, but hopefully you can reproduce it with fewer items?

AndrewSB

comment created time in 2 months

issue closedelastic/go-elasticsearch

The client does not reuse HTTP connections

Greetings,

The documentation (and issue tracker, see #118) seems to imply that HTTP connections should be reused, but that doesn't seem to be the case. Here is a small app:

package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/elastic/go-elasticsearch/v7"
	"github.com/elastic/go-elasticsearch/v7/esapi"
)

func main() {
	es, err := elasticsearch.NewClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
		//Transport: &http.Transport{MaxIdleConns: 8, MaxIdleConnsPerHost: 8, MaxConnsPerHost: 16, IdleConnTimeout: 10 * time.Second},
	})
	if err != nil {
		panic(err)
	}
	for i := 0; i < 1000; i++ {
		func() {
			log.Printf("Saving %d\n", i)
			req := esapi.IndexRequest{
				Index:      "test",
				DocumentID: fmt.Sprintf("%d", i),
				Body:       strings.NewReader(`{}`),
				Refresh:    "true",
			}
			res, err := req.Do(context.Background(), es)
			if err != nil {
				log.Println(err)
			}
			defer func() {
				err := res.Body.Close()
				if err != nil {
					log.Println(err)
				}
			}()
			if res.IsError() {
				log.Println(res.Status())
			}
		}()
	}
}

Before running the app netstat -an | grep 9200 shows:

tcp4       0      0  127.0.0.1.9200         *.*                    LISTEN     
tcp6       0      0  ::1.9200               *.*                    LISTEN     

If I then execute netstat -an | grep 9200 | awk '{print $6}' | sort | uniq -c while the app is running, the number of connections in TIME_WAIT state keeps growing. Running it immediately after the app finishes:

   2 LISTEN
1000 TIME_WAIT

Reproduced on both linux and MacOS with Elasticsearch 7.5.1, go 1.13.6 and elastic/go-elasticsearch 7.5.0.

Many thanks, damjan

closed time in 2 months

SamuraiPrinciple

issue commentelastic/go-elasticsearch

The client does not reuse HTTP connections

@turboezh, right, err is only returned when the response cannot be retrieved (host down, etc.).

Have a look at _examples/bulk, it has a lot of code for usual scenarios, and an example of the esutil.BulkIndexer helper, which is preferable to calling the Bulk API manually.

SamuraiPrinciple

comment created time in 2 months

issue commentelastic/go-elasticsearch

The client does not reuse HTTP connections

I've added a note about closing and consuming the response body in order to re-use connections into the README in bbd6b170 — thanks for raising the issue.


@turboezh Curious about a use case where you send a bulk request, but you're not interested in the response, i.e. you don't care if documents have been indexed successfully or not?

SamuraiPrinciple

comment created time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha bbd6b17000b471214c7fbc7ee79e1a6299fe7797

README: Add note about persistent HTTP connections Also update the example and add esutil.BulkIndexer to the list of helpers.

view details

Karel Minarik

commit sha 1d59d3118cfa16461c5298c1eaed099fb6d305b4

WIP > First sketch of the benchmarking runner

view details

Karel Minarik

commit sha 2ee73c7c8c1e77af5708a6079dd6b5b51b19ff88

WIP > Store a document for each iteration, not for the whole run

view details

Karel Minarik

commit sha 02d21cd6237712cd737a2ea1d56f3615e4e16265

WIP > Use esutil.BulkIndexer for storing the statistics

view details

Karel Minarik

commit sha a77b08f9e927c9215cb6b3b2bf5c5f617b6fa7e2

WIP > Add start time to Stats and use it as @timestamp

view details

Karel Minarik

commit sha 9c110b03609caf136f372db4a6489c26bd06dc2f

WIP > Add configuration for ELASTICSEARCH_TARGET_URL and ELASTICSEARCH_REPORT_URL

view details

Karel Minarik

commit sha 066d2d1ac22484a518f264a57ff161fa206f17b2

WIP > Update the benchmark runner and executable

view details

Karel Minarik

commit sha 537c51306d7fbcf62a59fc488982365a803e7d00

WIP > Refactor the runner setup and executable output

view details

Karel Minarik

commit sha 452e20b37211183152398393d04918e3ae30f6dd

WIP > Add information about OS, Git and runtime to the reported data

view details

Karel Minarik

commit sha 81bdc1de5779e1cc141b14bfcf4b70a56a794b2b

WIP > Add support for filtering the operations to run

view details

Karel Minarik

commit sha 2748e2fa79d9344d164456a5235bdd744bb995b6

WIP > Improve the debugging output when errors occur

view details

Karel Minarik

commit sha 5a2b24dffb34af3fe2bd5b2b3acdb6a25882a6b5

WIP > Add the "Get" operation to the list of benchmarks

view details

Karel Minarik

commit sha 4bcb1295b52e6d694f49d1870a6e66505a3341f5

WIP > Add support for passing repetition number to RunnerFunc() and the "Index()" benchmark

view details

Karel Minarik

commit sha 3eaf4279822f04d7ffe30355c79983d8052e1015

Dockerfile: Vendor benchmark dependencies during build

view details

Karel Minarik

commit sha e03266fc057554239e92d31ebabab687304f011a

WIP > Update the schema for the event document

view details

Karel Minarik

commit sha c95b79268c583ed5c4bb1eaf183e7e0c5aae0927

WIP > Read information about target and runner from environment and pass it to the runner

view details

Karel Minarik

commit sha ba629a2e8044a29588361154713cc1f3b0fa35c5

WIP > Added "category" and "environment" to benchmark metadata and removed the overall run duration

view details

Karel Minarik

commit sha 7380ab757e783e2ab1dfdb8b9104f987e1e84f41

WIP > Add consuming of res.Body in the benchmark operations

view details

Karel Minarik

commit sha 757bae3006c653ed3ab8e89e6054894bb15b4398

WIP > Add the client name to the labels

view details

Karel Minarik

commit sha 21d5f36e574187681d86c2ac45c2e3d26867475d

WIP > Move the main executable to "cmd"

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 7598ddca6c0d0a2a652ac12273c9aa385ab46cc4

README: Add note about persistent HTTP connections Also update the example and add esutil.BulkIndexer to the list of helpers. (cherry picked from commit bbd6b17000b471214c7fbc7ee79e1a6299fe7797)

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha db2bcf51489c50bf01ba9cf7e33248f221e5b76e

README: Add note about persistent HTTP connections Also update the example and add esutil.BulkIndexer to the list of helpers. (cherry picked from commit bbd6b17000b471214c7fbc7ee79e1a6299fe7797)

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha bbd6b17000b471214c7fbc7ee79e1a6299fe7797

README: Add note about persistent HTTP connections Also update the example and add esutil.BulkIndexer to the list of helpers.

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha b851dbba009ae9bb470a3792fd2ebaf7c629813f

CI: Update the scripts for running the API tests on Jenkins Use the common scripts for launching the Elasticsearch cluster and executing the tests.

view details

Karel Minarik

commit sha 188dd0c2e719263cafffe5e659c00f374ff1d653

WIP > First sketch of the benchmarking runner

view details

Karel Minarik

commit sha 1c47934453062410e3beeb6ce63939ed87ddded4

WIP > Store a document for each iteration, not for the whole run

view details

Karel Minarik

commit sha 4b50ad33ba4771a5491b3bd578fadadaad1c90bb

WIP > Use esutil.BulkIndexer for storing the statistics

view details

Karel Minarik

commit sha 4cb3c612c62833fde886af73e04b6e53fcc42b24

WIP > Add start time to Stats and use it as @timestamp

view details

Karel Minarik

commit sha 01e852558941020e859adf17c4529eb4a58c31d5

WIP > Add configuration for ELASTICSEARCH_TARGET_URL and ELASTICSEARCH_REPORT_URL

view details

Karel Minarik

commit sha 315d261abbb85f468b40e099f279fdf9a6651a3a

WIP > Update the benchmark runner and executable

view details

Karel Minarik

commit sha 9edf2d98e7b2fc8b876649fc4c9170e3be5bd177

WIP > Refactor the runner setup and executable output

view details

Karel Minarik

commit sha c0b1c7fd51f58ce367457b724bf8219f0fa34660

WIP > Add information about OS, Git and runtime to the reported data

view details

Karel Minarik

commit sha 7083e4267e48ee3f8ee31151871e8048b2b60a5a

WIP > Add support for filtering the operations to run

view details

Karel Minarik

commit sha c4a6219a4f4d853b16a7f4c897aa951bab8e6bb7

WIP > Improve the debugging output when errors occur

view details

Karel Minarik

commit sha 14627f42ec75fa8c5bf164d079c13009aa231d25

WIP > Add the "Get" operation to the list of benchmarks

view details

Karel Minarik

commit sha 662de859099c468c5a7b7f8ace52c548ded2252b

WIP > Add support for passing repetition number to RunnerFunc() and the "Index()" benchmark

view details

Karel Minarik

commit sha b4681cf85c67366ec622107ad996fd84cdeed6c3

Dockerfile: Vendor benchmark dependencies during build

view details

Karel Minarik

commit sha 25a11d62222f0fa8661cc8013053563d7a8f7522

WIP > Update the schema for the event document

view details

Karel Minarik

commit sha 95b6cbbfd1bfe24b1e6a4daada8f2814f7899abf

WIP > Read information about target and runner from environment and pass it to the runner

view details

Karel Minarik

commit sha a30ba4d781c38a874c8e02f496cfa047956aefe9

WIP > Added "category" and "environment" to benchmark metadata and removed the overall run duration

view details

Karel Minarik

commit sha 51f9933c2c52937a215baf72ef6640a9c1ed1884

WIP > Add consuming of res.Body in the benchmark operations

view details

Karel Minarik

commit sha 49272d3795c706b8ed704ea917175d76bab8398d

WIP > Add the client name to the labels

view details

Karel Minarik

commit sha 9a975c95d3f127a0e942e04b389f9c463a332d63

WIP > Move the main executable to "cmd"

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha b851dbba009ae9bb470a3792fd2ebaf7c629813f

CI: Update the scripts for running the API tests on Jenkins Use the common scripts for launching the Elasticsearch cluster and executing the tests.

view details

push time in 2 months

push eventelastic/go-elasticsearch

Karel Minarik

commit sha 84051fb420a9a6abe71d1638505f5837c4ddbedf

CI: Update the scripts for running the API tests on Jenkins Use the common scripts for launching the Elasticsearch cluster and executing the tests.

view details

Karel Minarik

commit sha 14ba12f5c2c65dcbce98745cec2955db255753dc

WIP > First sketch of the benchmarking runner

view details

Karel Minarik

commit sha 1720e7ba3ab04210222b376d4be188b6ae9720d0

WIP > Store a document for each iteration, not for the whole run

view details

Karel Minarik

commit sha 88d16c2a11ba5f82c054f00ebab5463bd2a813e7

WIP > Use esutil.BulkIndexer for storing the statistics

view details

Karel Minarik

commit sha 338e95e88daa20dfe30e4ee6489ff65e90f33f2c

WIP > Add start time to Stats and use it as @timestamp

view details

Karel Minarik

commit sha eab4a5e4df9db02024fa0d0d195d6ac58c51cb60

WIP > Add configuration for ELASTICSEARCH_TARGET_URL and ELASTICSEARCH_REPORT_URL

view details

Karel Minarik

commit sha bf9390209eaafcf3dc0835005993dbc08ab97453

WIP > Update the benchmark runner and executable

view details

Karel Minarik

commit sha d527367af6666b0296e52d8563730e1b0a888b21

WIP > Refactor the runner setup and executable output

view details

Karel Minarik

commit sha a098ccc7c2af1e82b9abc06b820383ea1c2d48f7

WIP > Add information about OS, Git and runtime to the reported data

view details

Karel Minarik

commit sha 5004867f3a14121f12bcbd4a168a9feb423aad84

WIP > Add support for filtering the operations to run

view details

Karel Minarik

commit sha 6797ca75d8ac4cdd52af71ef844694159422fe89

WIP > Improve the debugging output when errors occur

view details

Karel Minarik

commit sha 5f8625e56a627926646e9ba77afd852197f793dd

WIP > Add the "Get" operation to the list of benchmarks

view details

Karel Minarik

commit sha 3d21e10fd7663ec799e9b06dccb64682a10b3fe3

WIP > Add support for passing repetition number to RunnerFunc() and the "Index()" benchmark

view details

Karel Minarik

commit sha 12c4ece4aafde8823b106295e363adeb249626ee

Dockerfile: Vendor benchmark dependencies during build

view details

Karel Minarik

commit sha 4aab54791ce8fe74e31ba461a658e77ed10eb43b

WIP > Update the schema for the event document

view details

Karel Minarik

commit sha a2a5c3d9aab3473a05cc3d9d10da8473e005cb3d

WIP > Read information about target and runner from environment and pass it to the runner

view details

Karel Minarik

commit sha db3b41b724ad693ff7936cbe1f89b55ce2e62104

WIP > Added "category" and "environment" to benchmark metadata and removed the overall run duration

view details

Karel Minarik

commit sha 22f8cd41e5a4149017d54befff6cdb7142ff0c85

WIP > Add consuming of res.Body in the benchmark operations

view details

Karel Minarik

commit sha df202288a566243ea4c2638a1642932b48192950

WIP > Add the client name to the labels

view details

Karel Minarik

commit sha ed62214aa87ad6ea9caa93893cf6a53b6fe011f2

WIP > Move the main executable to "cmd"

view details

push time in 2 months
