chalice-dev/awesome-chalice 113

☁️ Awesome Chalice: Community list of resources about AWS Chalice, a Python framework for writing serverless applications.

mrshu/brutal-plugins 4

A set of plugins for brutal, the mighty chatbot

Adman/ddg-gadget 2

Windows gadget for displaying 0clickinfo from DuckDuckGo

Adman/road-segmentation 0

Binary pixel-wise segmentation for predicting driveable path

fmfi-genomika/genomikaMalGlo 0

Malassezia globosa

mrshu/24pullrequests 0

Giving back little gifts of code for Christmas

mrshu/ack2 0

ack 2.0 is a grep-like tool optimized for programmers searching large heterogeneous trees of source code.

issue opened sqlfluff/sqlfluff

Python Typing for all

The functions in the rules don't have typing. For consistency, all functions should have typing.
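
For illustration, here is roughly what such a change looks like on a rule evaluation function; the names and types below are hypothetical stand-ins, not sqlfluff's actual signatures:

    from typing import Any, Optional

    # Before: unannotated, so mypy cannot check calls into or out of it.
    def _eval_rule(segment, parent_stack, **kwargs):
        return None

    # After: the same function annotated with standard typing constructs
    # (Any and Optional[str] are illustrative placeholders).
    def _eval_rule_typed(segment: Any, parent_stack: tuple, **kwargs: Any) -> Optional[str]:
        return None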

created time in an hour

issue opened sqlfluff/sqlfluff

Number of threads configurable in .sqlfluff

Being able to set the number of threads in .sqlfluff might be useful, so it doesn't have to be passed on the CLI every time.
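
Since .sqlfluff is an INI-style file (see the configuration quoted later in this feed), the proposal presumably amounts to something like the snippet below; the key name is hypothetical, not an existing option:

    [sqlfluff]
    # Hypothetical key: default worker count for lint/fix,
    # instead of passing --parallel on every invocation.
    parallel = 4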

created time in 2 hours

issue opened sqlfluff/sqlfluff

--threads instead of --parallel?

I feel that --threads is clearer than --parallel about what the input value does. We can then also change the help string to specify that the number passed is the number of threads used.

created time in 2 hours

issue opened sqlfluff/sqlfluff

Parallel linting is un-killable

sqlfluff 0.6.0a1

Running lint --parallel 4.

Expected outcome

While running, if I press Ctrl+C to abort the process, it should stop gracefully.

What actually happens

I get a traceback output, but the process doesn't actually stop. Eventually I have to kill the shell to get it to exit.
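
For background, a common cause of this behavior in Python (not necessarily sqlfluff's exact situation) is that a KeyboardInterrupt in the parent process does not stop multiprocessing workers on its own; a minimal sketch of the usual handling is:

    import multiprocessing
    import time

    def lint_one(path):
        # Stand-in for linting a single file.
        time.sleep(10)
        return path

    if __name__ == "__main__":
        pool = multiprocessing.Pool(4)
        try:
            results = pool.map(lint_one, ["a.sql", "b.sql", "c.sql"])
        except KeyboardInterrupt:
            pool.terminate()  # Without this, Ctrl+C can leave workers running.
        else:
            pool.close()
        finally:
            pool.join()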

created time in 3 hours

created tag sqlfluff/sqlfluff

tag 0.6.0a1

A SQL linter and auto-formatter for Humans

created time in 4 hours

push event sqlfluff/sqlfluff

Niall Woodward

commit sha c68f9ee84e534c00d35085dcde74f926890631ce

0.6.0a1 Release (#1054)

push time in 4 hours

pull request comment sqlfluff/sqlfluff

Send all logging to stderr

I thought that click.echo() was supposed to handle some of this complexity? I don't have a full view into all the reasons that we use logging, but I will say that it is extremely annoying to get tests (notably, test__cli__command_lint_warning_explicit_file_ignored) to pass under this stderr stream handler, given that pytest and click are both trying to intercept output.

Does #1037 go deeper than I thought? Does it come down to our philosophy about whether fluff's output is data or logging?

Another perspective: we could implement my solution 1 via a logging method on the linter class (linter.log(...)) which includes conditional logic given the config or filename.
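
In miniature, that idea might look like the following (the method and the from_stdin flag are hypothetical, not the real linter API):

    import sys

    class Linter:
        def __init__(self, from_stdin: bool = False):
            self.from_stdin = from_stdin  # hypothetical config flag

        def log(self, message: str) -> None:
            # Route logging away from stdout whenever stdin supplies the
            # input, so log lines cannot corrupt the lint output stream.
            stream = sys.stderr if self.from_stdin else sys.stdout
            print(message, file=stream)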

nolanbconaway

comment created time in 5 hours

pull request comment sqlfluff/sqlfluff

Send all logging to stderr

Forgot the link: https://blog.finxter.com/what-is-python-output-buffering-and-how-to-disable-it/

nolanbconaway

comment created time in 5 hours

pull request comment sqlfluff/sqlfluff

Send all logging to stderr

I think this is a good idea. We may want to use one of the techniques here to ensure that stuff written to stdout is flushed before any subsequent writes to stderr. Otherwise, I think we could see some weird intermingling of the two.

Reason: By default, stdout is buffered, but stderr is unbuffered.
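
A minimal illustration of the buffering point (plain Python, not sqlfluff code):

    import sys

    print("lint output on stdout")  # stdout is typically line- or block-buffered
    sys.stdout.flush()  # flush before touching stderr ...
    print("log line on stderr", file=sys.stderr)  # ... which is not buffered the same way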

nolanbconaway

comment created time in 5 hours

PR opened sqlfluff/sqlfluff

Send all logging to stderr

A proposal to fix https://github.com/sqlfluff/sqlfluff/issues/1037: send all logging to stderr.

The options as I saw them were:

  1. (annoying) Whenever logging is being done, check whether stdin is being used for input; if so, do not log.
  2. (more annoying) Whenever logging is being done, check whether stdin is being used for input; if so, configure the logger stream to stderr instead of stdout.
  3. (least annoying) Send everything to stderr.

I can see the places where the issue in #1037 arises and can easily implement solution 1 there, but this doesn't fix the general issue that we are using stdout for output AND logging at times.

I don't know enough about the logger use cases to say whether everything ought to go to stderr; I would love some more expert perspectives!
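
A minimal sketch of option 3 (an illustration of the approach, not the actual patch):

    import logging
    import sys

    # Attach a stderr handler so log records bypass stdout entirely,
    # leaving stdout exclusively for lint results.
    handler = logging.StreamHandler(sys.stderr)
    logger = logging.getLogger("sqlfluff")
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)

    logger.debug("this log line goes to stderr, not stdout")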

+4 -0

0 comments

1 changed file

pr created time in 5 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Looks good to me! I just merged a fix for #1051, so we may want to include it in the change log.

Could you re-approve, please?

NiallRees

comment created time in 5 hours

create branch sqlfluff/sqlfluff

branch: nolan/send-logging-to-stderr

created branch time in 5 hours

push event sqlfluff/sqlfluff

Barry Hart

commit sha 7905f24ad75b0f7577b31c4b8232d71408f90114

Clean up some uses of "# type: ignore"

Barry Hart

commit sha 84561daf961c4f618201638110e123180efee163

Fix error in parser.py

Barry Hart

commit sha b44d082070fe1f1746b1b8e06c9f20780f482e18

Issue 1051: Add support for binary operators

Barry Hart

commit sha c170f3221cea7346cc13f21730169167a8701f03

Fix some lexing issues

Barry Hart

commit sha 34032ad5d05a95ab3a0f2211bcfb1920b8c9294e

Move binary shift tokens before comparison operators in search order

Barry Hart

commit sha 02c494c59827fe43753505a4fc89d4f0ed6ef3b5

Shift operators

Barry Hart

commit sha bea1f6bbf44d258f7ec2797d9b36a7ef64f3b769

Rename Binary -> Bitwise

Barry Hart

commit sha bfff2ff914c565820500d77fa465627c6f436c4c

Run Black, fix accidental renaming

Barry Hart

commit sha cd2f90c447ef8c8cae7fa4b516e895bc5919e32c

Create explicit segment classes for the bitwise left and right shift operators

Barry Hart

commit sha 730413150ab69d00f2222683bcf1513b57d6cffe

Merge pull request #1053 from barrywhart/bhart-issue_1051_binary_operators
Issue 1051: Add support for binary operators

Barry Hart

commit sha 4c1e24afa20b0503cc1824260b6f14c54a24119e

Merge branch 'master' into bhart-typing_tweaks

Barry Hart

commit sha 50ae3188e195cc5808ac8350da336139d32e4743

Merge branch 'master' into bhart-typing_tweaks

Barry Hart

commit sha 90bc507e1245faf492a1051570d3e62d4c5b6278

Merge branch 'bhart-typing_tweaks' of https://github.com/barrywhart/sqlfluff into bhart-typing_tweaks

Barry Hart

commit sha 58f237170c24b1d8e9f0ab450b6ea24996d353c7

Update changelog to mention adding support for bitwise operators

Barry Hart

commit sha 2577fee3253c3f40ce87fe5d241a6891a98fb84e

Merge pull request #1052 from barrywhart/bhart-typing_tweaks
Clean up some uses of "# type: ignore"

Niall Woodward

commit sha eebe9b467a27fbe32483e9c09acf1824fea76d44

Merge branch 'master' into 0.6.0a1

push time in 5 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Merged!

NiallRees

comment created time in 6 hours

push event sqlfluff/sqlfluff

Barry Hart

commit sha 7905f24ad75b0f7577b31c4b8232d71408f90114

Clean up some uses of "# type: ignore"

Barry Hart

commit sha 84561daf961c4f618201638110e123180efee163

Fix error in parser.py

Barry Hart

commit sha 4c1e24afa20b0503cc1824260b6f14c54a24119e

Merge branch 'master' into bhart-typing_tweaks

Barry Hart

commit sha 50ae3188e195cc5808ac8350da336139d32e4743

Merge branch 'master' into bhart-typing_tweaks

Barry Hart

commit sha 90bc507e1245faf492a1051570d3e62d4c5b6278

Merge branch 'bhart-typing_tweaks' of https://github.com/barrywhart/sqlfluff into bhart-typing_tweaks

Barry Hart

commit sha 58f237170c24b1d8e9f0ab450b6ea24996d353c7

Update changelog to mention adding support for bitwise operators

Barry Hart

commit sha 2577fee3253c3f40ce87fe5d241a6891a98fb84e

Merge pull request #1052 from barrywhart/bhart-typing_tweaks
Clean up some uses of "# type: ignore"

push time in 6 hours

PR merged sqlfluff/sqlfluff

Clean up some uses of "# type: ignore"

No functional changes here, just small changes to reduce the use of # type: ignore.

+14 -7

1 comment

4 changed files

barrywhart

pr closed time in 6 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Ok, I updated the change log in my PR #1052. Will merge that in a few minutes if the build passes. 🤞🏽

NiallRees

comment created time in 6 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Mind if I include it in #1052? It's not related, but you'd approved that, and I was about to merge it once the build passes.

NiallRees

comment created time in 6 hours

Pull request review comment sqlfluff/sqlfluff

Clean up some uses of "# type: ignore"

 def fix_string(self) -> Tuple[Any, bool]:
     bencher("fix_string: start")

     linter_logger.debug("Original Tree: %r", self.templated_file.templated_str)
-    linter_logger.debug("Fixed Tree: %r", self.tree.raw)  # type: ignore
+    assert self.tree

Sweet, thanks for the link 👍

barrywhart

comment created time in 6 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Looks good to me! I just merged a fix for #1051, so we may want to include it in the change log.

Would you mind doing that? I'll then merge and make a release.

NiallRees

comment created time in 6 hours

Pull request review comment sqlfluff/sqlfluff

Clean up some uses of "# type: ignore"

 def fix_string(self) -> Tuple[Any, bool]:
     bencher("fix_string: start")

     linter_logger.debug("Original Tree: %r", self.templated_file.templated_str)
-    linter_logger.debug("Fixed Tree: %r", self.tree.raw)  # type: ignore
+    assert self.tree

Ah, I found an official mention of this in the MyPy docs:

Sometimes mypy doesn’t realize that a value is never None. This notably happens when a class instance can exist in a partially defined state, where some attribute is initialized to None during object construction, but a method assumes that the attribute is no longer None. Mypy will complain about the possible None value. You can use assert x is not None to work around this in the method.

barrywhart

comment created time in 6 hours

Pull request review comment sqlfluff/sqlfluff

Clean up some uses of "# type: ignore"

 def fix_string(self) -> Tuple[Any, bool]:
     bencher("fix_string: start")

     linter_logger.debug("Original Tree: %r", self.templated_file.templated_str)
-    linter_logger.debug("Fixed Tree: %r", self.tree.raw)  # type: ignore
+    assert self.tree

Previously, mypy would complain because self.tree is an Optional value (i.e. it could be None). Mypy pays attention to assertions, so the assert guarantees the value is not None, and the types therefore match.

I can't find any docs explaining this exact case, but see the "Note" on this page: it's pretty similar, although it uses isinstance().

https://mypy.readthedocs.io/en/stable/casts.html#casts-and-type-assertions
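
The pattern in miniature (the class names here are illustrative, not sqlfluff's real ones):

    from typing import Optional

    class Tree:
        raw: str = "select 1"

    class LintedFile:
        def __init__(self) -> None:
            # Partially initialized: tree is None until parsing succeeds.
            self.tree: Optional[Tree] = None

        def fixed_raw(self) -> str:
            # mypy narrows self.tree from Optional[Tree] to Tree after the
            # assert, so no "# type: ignore" is needed on the return line.
            assert self.tree is not None
            return self.tree.raw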

barrywhart

comment created time in 6 hours

pull request comment sqlfluff/sqlfluff

0.6.0a1 Release

Looks good to me! I just merged a fix for #1051, so we may want to include it in the change log.

NiallRees

comment created time in 6 hours

issue comment sqlfluff/sqlfluff

Improve the performance of sqlfluff

#642 has been addressed, adding support for running lint and fix on multiple processors. This should help with performance issues. Leaving this issue open, though, as I think there are still some good opportunities to improve raw performance. Also see #1046 for another proposed performance improvement.

dclong

comment created time in 6 hours

pull request comment sqlfluff/sqlfluff

Now raising error when old capitalisation_policy config is used

@panasenco: Do you think you'll have time to finish this PR?

I think the additional idea you mentioned is good:

general rule that throws errors whenever extraneous configurations are provided in any rule class?

However, I suggest addressing that as a different issue/PR, so we can go ahead and get this merged.

panasenco

comment created time in 6 hours

issue closed sqlfluff/sqlfluff

L014 linting and fixing "Inconsistent capitalisation of unquoted identifiers" that do not exist

Expected Behaviour

If the SQL has consistent unquoted identifiers that match the policy in the .sqlfluff config, for example:

[sqlfluff:rules:L014]  # Unquoted identifiers
capitalisation_policy = lower
unquoted_identifiers_policy = all

Then sqlfluff should not throw any L014 | Inconsistent capitalisation of unquoted identifiers errors and should not apply any L014 "fixes" (since they do not exist).

Observed Behaviour

  1. sqlfluff lint sl_web_spot_pages.sql is finding L014 | Inconsistent capitalisation of unquoted identifiers linting errors that do not exist.
  2. sqlfluff fix sl_web_spot_pages.sql is "fixing" L014 | Inconsistent capitalisation of unquoted identifiers errors that do not exist. These "fixes" are not desired and result in inconsistent casing throughout the "fixed" SQL. I want to maintain lower casing throughout my dbt models.

Steps to Reproduce

Version

I have installed sqlfluff from source using pip install git+https://github.com/sqlfluff/sqlfluff.git (currently on commit: 9970bf2e7001b23dc2f84c5ec572a28cd92099f6) so that I can take advantage of recent PRs (#868 and #964) instead of having to wait for the next release.

Source dbt model SQL:

-- filename: sl_web_spot_pages.sql

{{
	config(
        materialized = 'incremental',
        sort = 'received_at',
        unique_key = 'page_id'
	)
}}

with user_entitlement as (

    select * from {{ ref('sl_web_user_entitlement_base') }}

),

has_cam_entitlement as (

    select * from {{ ref('spots_has_cam_map') }}

),

pages as (

    select * from {{ ref('base_web_pages') }}

),

pages_xf as (

    select
        pages.received_at,
        pages.page_id,
        pages.spot_id,
        pages.user_id,
        pages.anonymous_id,
        pages.cam_id,
        pages.is_single_cam,
        pages.is_mobile_view,
        pages.cam_name_primary,
        user_entitlement.subscription_entitlement

    from pages
    left join user_entitlement
        on pages.user_id = user_entitlement.user_id
            and pages.received_at >= user_entitlement.entitlement_from
            and pages.received_at < user_entitlement.entitlement_to
    where pages.spot_id is not null
        and pages.name = 'Spot Report'

        -- Remove multicam favorites page hits for now due to varying # of cams on page (user selected)
        -- and pages.context_page_url != 'https://www.surfline.com/surf-cams'
        /*
        2019-12-28
        There are only 13 records of the above "&" scenario - which is immaterial.
        This might not be worth the compute time.

        - Julian
        */

    {% if is_incremental() %}

        -- Only get new data (not including today's data)
        and pages.received_at > (select max (received_at) from {{ this }} )
        and pages.received_at < current_date

    {% else %}

        and pages.received_at >= date_trunc('year', current_date - 1) - interval '1 year'

    {% endif %}

),

final as (

    select
        pages_xf.received_at,
        pages_xf.page_id,
        pages_xf.spot_id,
        has_cam_entitlement.has_cam,
        pages_xf.user_id,
        pages_xf.anonymous_id,
        pages_xf.cam_id,
        pages_xf.is_single_cam,
        pages_xf.is_mobile_view,
        pages_xf.cam_name_primary,
        pages_xf.subscription_entitlement

    from pages_xf
    -- inner join because some spot_ids not in MongoDB
    inner join has_cam_entitlement using (spot_id)

)

select * from final


Compiled dbt model SQL (from source above using dbt-compiler):

-- filename: sl_web_spot_pages.sql



with  __dbt__CTE__sl_web_user_entitlement_base as (
-- filename: sl_web_user_entitlement_base.sql



/*
Rows in this model are:
    - Individual hits of Segment identify() calls on web or native.

This model can be used to map user entitlement to pages or segment events. This
model does NOT summarize the "subscription window(s)" for a given user_id and 
subscription. Further modeling and transformations are required to extract 
subscription window summaries.
*/

with final as (

    select distinct  -- Make sure duplicate rows are not included
        user_id,
        subscription_entitlement,
        received_at as entitlement_from,
        coalesce(lead(received_at, 1) over(partition by user_id order by received_at),'3001-01-01') as entitlement_to
                
    from surfline.identifies

    where subscription_entitlement is not null

)

select * from final
),  __dbt__CTE__base_web_pages as (
--filename: base_web_pages.sql



with final as (

    select
        received_at,
        id as page_id,
        spot_id,
        has_cam,
        user_id,
        anonymous_id,
        name,
        -- fields below are in production as of: 2020-01-31 (1st full day of data)
        cam_id,
        is_single_cam,
        is_mobile_view,
        cam_name_primary

    from surfline.pages

)

select * from final
),user_entitlement as (

    select * from __dbt__CTE__sl_web_user_entitlement_base

),

has_cam_entitlement as (

    select * from "segment"."greg_clunies"."spots_has_cam_map"

),

pages as (

    select * from __dbt__CTE__base_web_pages

),

pages_xf as (

    select
        pages.received_at,
        pages.page_id,
        pages.spot_id,
        pages.user_id,
        pages.anonymous_id,
        pages.cam_id,
        pages.is_single_cam,
        pages.is_mobile_view,
        pages.cam_name_primary,
        user_entitlement.subscription_entitlement

    from pages
    left join user_entitlement
        on pages.user_id = user_entitlement.user_id
            and pages.received_at >= user_entitlement.entitlement_from
            and pages.received_at < user_entitlement.entitlement_to
    where pages.spot_id is not null
        and pages.name = 'Spot Report'

        -- Remove multicam favorites page hits for now due to varying # of cams on page (user selected)
        -- and pages.context_page_url != 'https://www.surfline.com/surf-cams'
        /*
        2019-12-28
        There are only 13 records of the above "&" scenario - which is immaterial.
        This might not be worth the compute time.

        - Julian
        */

    

        -- Only get new data (not including today's data)
        and pages.received_at > (select max (received_at) from "segment"."greg_clunies"."sl_web_spot_pages" )
        and pages.received_at < current_date

    

),

final as (

    select
        pages_xf.received_at,
        pages_xf.page_id,
        pages_xf.spot_id,
        has_cam_entitlement.has_cam,
        pages_xf.user_id,
        pages_xf.anonymous_id,
        pages_xf.cam_id,
        pages_xf.is_single_cam,
        pages_xf.is_mobile_view,
        pages_xf.cam_name_primary,
        pages_xf.subscription_entitlement

    from pages_xf
    -- inner join because some spot_ids not in MongoDB
    inner join has_cam_entitlement using (spot_id)

)

select * from final


"Fixed" dbt model SQL (results in inconsistent casing):

-- filename: sl_web_spot_pages.sql

{{
	config(
        materialized = 'incremental',
        sort = 'received_at',
        unique_key = 'page_id'
	)
}}

with user_entitlement as (

    select * from {{ ref('sl_web_user_entitlement_base') }}

),

HAS_CAM_ENTITLEMENT as (

    select * from {{ ref('spots_has_cam_map') }}

),

PAGES as (

    select * from {{ ref('base_web_pages') }}

),

PAGES_XF as (

    select
        PAGES.RECEIVED_AT,
        PAGES.PAGE_ID,
        PAGES.SPOT_ID,
        PAGES.USER_ID,
        PAGES.ANONYMOUS_ID,
        PAGES.CAM_ID,
        PAGES.IS_SINGLE_CAM,
        PAGES.IS_MOBILE_VIEW,
        PAGES.CAM_NAME_PRIMARY,
        USER_ENTITLEMENT.SUBSCRIPTION_ENTITLEMENT

    from PAGES
    left join USER_ENTITLEMENT
        on PAGES.USER_ID = USER_ENTITLEMENT.USER_ID
            and PAGES.RECEIVED_AT >= USER_ENTITLEMENT.ENTITLEMENT_FROM
            and PAGES.RECEIVED_AT < USER_ENTITLEMENT.ENTITLEMENT_TO
    where PAGES.SPOT_ID is not null
        and PAGES.NAME = 'Spot Report'

        -- Remove multicam favorites page hits for now due to varying # of cams on page (user selected)
        -- and pages.context_page_url != 'https://www.surfline.com/surf-cams'
        /*
        2019-12-28
        There are only 13 records of the above "&" scenario - which is immaterial.
        This might not be worth the compute time.

        - Julian
        */

    {% if is_incremental() %}

        -- Only get new data (not including today's data)
        and PAGES.RECEIVED_AT > (select max(RECEIVED_AT) from {{ this }} )
        and PAGES.RECEIVED_AT < current_date

    {% else %}

        and pages.received_at >= date_trunc('year', current_date - 1) - interval '1 year'

    {% endif %}

),

FINAL as (

    select
        PAGES_XF.RECEIVED_AT,
        PAGES_XF.PAGE_ID,
        PAGES_XF.SPOT_ID,
        HAS_CAM_ENTITLEMENT.HAS_CAM,
        PAGES_XF.USER_ID,
        PAGES_XF.ANONYMOUS_ID,
        PAGES_XF.CAM_ID,
        PAGES_XF.IS_SINGLE_CAM,
        PAGES_XF.IS_MOBILE_VIEW,
        PAGES_XF.CAM_NAME_PRIMARY,
        PAGES_XF.SUBSCRIPTION_ENTITLEMENT

    from PAGES_XF
    -- inner join because some spot_ids not in MongoDB
    inner join HAS_CAM_ENTITLEMENT using (SPOT_ID)

)

select * from FINAL


Configuration

SQLFluff configuration:

# For SQLFluff Rules reference, see:
# https://docs.sqlfluff.com/en/stable/rules.html#rules-reference
[sqlfluff]
verbose = 0
nocolor = False
dialect = postgres
templater = dbt
rules = None
exclude_rules = L032,L033,L034,L037,L044
recurse = 0
output_line_length = 120
runaway_limit = 10
ignore = parsing
ignore_templated_areas = True

[sqlfluff:indentation]
indented_joins = False
template_blocks_indent = True

# Some rules can be configured directly from the config common to other rules.
[sqlfluff:rules]
tab_space_size = 4
max_line_length = 120
indent_unit = space
comma_style = trailing
allow_scalar = True
single_table_references = consistent
only_aliases = True

# Some rules have their own specific config.
# All SQLFluff rules can be found at: https://docs.sqlfluff.com/en/stable/rules.html#rules-reference
# When a rule is not listed below, we inherit the default behavior from above.
[sqlfluff:rules:L003]
lint_templated_tokens = True

[sqlfluff:rules:L010]  # Keywords
capitalisation_policy = lower

[sqlfluff:rules:L014]  # Unquoted identifiers
capitalisation_policy = lower
unquoted_identifiers_policy = all

[sqlfluff:rules:L016]
# Setting to True allows us to copy/paste long URLs as comments
ignore_comment_lines = True

[sqlfluff:rules:L030]  # Function names
capitalisation_policy = lower

[sqlfluff:rules:L038]
select_clause_trailing_comma = forbid

[sqlfluff:rules:L040]  # Null & Boolean Literals
capitalisation_policy = lower

[sqlfluff:rules:L042]
# By default, allow subqueries in from clauses, but not join clauses.
forbid_subquery_in = join


closed time in 6 hours

GClunies

issue comment sqlfluff/sqlfluff

L014 linting and fixing "Inconsistent capitalisation of unquoted identifiers" that do not exist

I'm closing this issue. The PR hasn't been merged yet, but that PR adds error reporting for when the configuration is not set up correctly, and the problem can be solved without it. (It's still a good change, though!)

GClunies

comment created time in 6 hours

issue closed sqlfluff/sqlfluff

BigQuery dialect: Unable to lex characters: '&'

Expected Behaviour

The BigQuery dialect should be able to lex and parse bitwise operators, such as &.

Observed Behaviour / Steps to Reproduce

bitwise.sql:

select features & (1 << 1) = (1 << 1) as is_internal from whatever

sqlfluff parse --dialect bigquery bitwise.sql:

[L:  1, P:  1]      |file:
[L:  1, P:  1]      |    statement:
[L:  1, P:  1]      |        select_statement:
[L:  1, P:  1]      |            select_clause:
[L:  1, P:  1]      |                keyword:                                      'select'
[L:  1, P:  7]      |                [META] indent:
[L:  1, P:  7]      |                whitespace:                                   ' '
[L:  1, P:  8]      |                select_clause_element:
[L:  1, P:  8]      |                    column_reference:
[L:  1, P:  8]      |                        identifier:                           'features'
[L:  1, P: 16]      |                    unparsable:                               !! Expected: 'Nothing...'
[L:  1, P: 16]      |                        whitespace:                           ' '
[L:  1, P: 17]      |                        unlexable:                            '&'
[L:  1, P: 18]      |                        whitespace:                           ' '
[L:  1, P: 19]      |                        start_bracket:                        '('
[L:  1, P: 20]      |                        raw:                                  '1'
[L:  1, P: 21]      |                        whitespace:                           ' '
[L:  1, P: 22]      |                        raw:                                  '<'
[L:  1, P: 23]      |                        raw:                                  '<'
[L:  1, P: 24]      |                        whitespace:                           ' '
[L:  1, P: 25]      |                        raw:                                  '1'
[L:  1, P: 26]      |                        end_bracket:                          ')'
[L:  1, P: 27]      |                        whitespace:                           ' '
[L:  1, P: 28]      |                        raw:                                  '='
[L:  1, P: 29]      |                        whitespace:                           ' '
[L:  1, P: 30]      |                        start_bracket:                        '('
[L:  1, P: 31]      |                        raw:                                  '1'
[L:  1, P: 32]      |                        whitespace:                           ' '
[L:  1, P: 33]      |                        raw:                                  '<'
[L:  1, P: 34]      |                        raw:                                  '<'
[L:  1, P: 35]      |                        whitespace:                           ' '
[L:  1, P: 36]      |                        raw:                                  '1'
[L:  1, P: 37]      |                        end_bracket:                          ')'
[L:  1, P: 38]      |                        whitespace:                           ' '
[L:  1, P: 39]      |                        raw:                                  'as'
[L:  1, P: 41]      |                        whitespace:                           ' '
[L:  1, P: 42]      |                        raw:                                  'is_internal'
[L:  1, P: 53]      |            whitespace:                                       ' '
[L:  1, P: 54]      |            [META] dedent:
[L:  1, P: 54]      |            from_clause:
[L:  1, P: 54]      |                keyword:                                      'from'
[L:  1, P: 58]      |                whitespace:                                   ' '
[L:  1, P: 59]      |                from_expression:
[L:  1, P: 59]      |                    [META] indent:
[L:  1, P: 59]      |                    from_expression_element:
[L:  1, P: 59]      |                        table_expression:
[L:  1, P: 59]      |                            table_reference:
[L:  1, P: 59]      |                                identifier:                   'whatever'
[L:  1, P: 67]      |                    [META] dedent:
[L:  1, P: 67]      |    newline:                                                  '\n'

==== parsing violations ====
L:   1 | P:  17 |  LXR | Unable to lex characters: '&'
L:   1 | P:  16 |  PRS | Found unparsable section: ' & (1 << 1) = (1 << 1) as is_internal'

Note that the same error exists using the ansi dialect, but I'm not sure if it should be supported for ANSI SQL.

Version

$ sqlfluff --version
sqlfluff, version 0.5.6

$ python3 --version
Python 3.8.10

Configuration

None.

closed time in 6 hours

jdub

issue comment sqlfluff/sqlfluff

BigQuery dialect: Unable to lex characters: '&'

Resolved by #1053

jdub

comment created time in 6 hours

push event sqlfluff/sqlfluff

Barry Hart

commit sha b44d082070fe1f1746b1b8e06c9f20780f482e18

Issue 1051: Add support for binary operators

Barry Hart

commit sha c170f3221cea7346cc13f21730169167a8701f03

Fix some lexing issues

Barry Hart

commit sha 34032ad5d05a95ab3a0f2211bcfb1920b8c9294e

Move binary shift tokens before comparison operators in search order

Barry Hart

commit sha 02c494c59827fe43753505a4fc89d4f0ed6ef3b5

Shift operators

Barry Hart

commit sha bea1f6bbf44d258f7ec2797d9b36a7ef64f3b769

Rename Binary -> Bitwise

Barry Hart

commit sha bfff2ff914c565820500d77fa465627c6f436c4c

Run Black, fix accidental renaming

Barry Hart

commit sha cd2f90c447ef8c8cae7fa4b516e895bc5919e32c

Create explicit segment classes for the bitwise left and right shift operators

Barry Hart

commit sha 730413150ab69d00f2222683bcf1513b57d6cffe

Merge pull request #1053 from barrywhart/bhart-issue_1051_binary_operators
Issue 1051: Add support for binary operators

push time in 6 hours