FrankYinXF/3DReconstruction 0

A MATLAB implementation of a full 3D Reconstruction pipeline using the functions provided by the Czech Technical University in Prague, Faculty of Electrical Engineering

FrankYinXF/Aboboo-Docs 0

Documentation for Aboboo (Free And Open Language Training Platform)

FrankYinXF/anki 0

Anki for desktop computers

FrankYinXF/awesome-c-cn 0

A comprehensive list of C resources (Chinese edition), covering build systems, compilers, databases, cryptography, beginner/intermediate/advanced tutorials and guides, books, libraries, and more.

FrankYinXF/awesome-cpp 0

A curated list of awesome C/C++ frameworks, libraries, resources, and shiny things. Inspired by awesome-... stuff.

FrankYinXF/awesome-cpp-cn 0

A comprehensive list of C++ resources (Chinese edition): standard libraries, web application frameworks, artificial intelligence, databases, image processing, machine learning, logging, code analysis, and more.

FrankYinXF/awesome-css-cn 0

A comprehensive list of CSS resources (Chinese edition), covering CSS preprocessors, frameworks, CSS architecture, code style guides, naming conventions, and more.

FrankYinXF/awesome-deep-vision 0

A curated list of deep learning resources for computer vision

FrankYinXF/awesome-machine-learning 0

A curated list of awesome Machine Learning frameworks, libraries and software.

FrankYinXF/awesome-machine-learning-cn 0


issue opened ytdl-org/youtube-dl

No stdout streaming for reddit, writes to file instead (-.mp4)


###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ######################################################################



<!-- Carefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:

  • First off, make sure you are using the latest version of youtube-dl. Run youtube-dl --version and ensure your version is 2020.11.29. If it's not, see on how to update. Issues with an outdated version will be REJECTED.

  • Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.

  • Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in

  • Search the bugtracker for similar issues: DO NOT post duplicates.

  • Read bugs section in FAQ:

  • Finally, put x into all relevant boxes (like this [x]) -->

  • [ ] I'm reporting a broken site support issue

  • [x] I've verified that I'm running youtube-dl version 2020.11.29

  • [x] I've checked that all provided URLs are alive and playable in a browser

  • [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped

  • [x] I've searched the bugtracker for similar bug reports including closed ones

  • [x] I've read bugs section in FAQ

Verbose log

<!-- Provide the complete verbose output of youtube-dl that clearly demonstrates the problem. Add the -v flag to your command line you run youtube-dl with (youtube-dl -v <your command line>), copy the WHOLE output and insert it below. It should look similar to this: [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'-v', u''] [debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251 [debug] youtube-dl version 2020.11.29 [debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2 [debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4 [debug] Proxy map: {} <more lines> -->

youtube-dl -v   -o - > i.mp4
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'', u'-o', u'-']
[debug] Encodings: locale UTF-8, fs UTF-8, out None, pref UTF-8
[debug] youtube-dl version 2020.11.29
[debug] Python version 2.7.18 (CPython) - Linux-5.4.0-52-generic-x86_64-with-LinuxMint-20-ulyana
[debug] exe versions: ffmpeg 4.2.4, ffprobe 4.2.4, rtmpdump 2.4
[debug] Proxy map: {}
[RedditR] k3aoyn: Downloading JSON metadata
[Reddit] onbvmhlc17261: Downloading m3u8 information
[Reddit] onbvmhlc17261: Downloading MPD manifest
[debug] Default format spec: best/bestvideo+bestaudio
[debug] Invoking downloader on u''
[download] Destination: -.fdash-video_2329911
[download] 100% of 3.48MiB in 00:05
[debug] Invoking downloader on u''
[download] Destination: -.fdash-audio_0_132011
[download] 100% of 202.46KiB in 00:01
[ffmpeg] Merging formats into "-.mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel 'repeat+info' -i 'file:-.fdash-video_2329911' -i 'file:-.fdash-audio_0_132011' -c copy -map '0:v:0' -map '1:a:0' 'file:-.temp.mp4'
Deleting original file -.fdash-video_2329911 (pass -k to keep)
Deleting original file -.fdash-audio_0_132011 (pass -k to keep)


<!-- Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see Provide any additional information, suggested solution and as much context and examples as possible. If work on your issue requires account credentials please provide them or explain how one can obtain them. -->

-o - doesn't redirect to stdout for reddit but creates a file instead (-.mp4)
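For readers hitting the same thing: the behavior follows from the merge step. ffmpeg needs a seekable output to finalize an MP4 container, so when separate DASH video and audio formats must be merged, the literal `-` ends up used as a filename. A minimal Python sketch of that decision (the function name and logic are my own illustration, not youtube-dl's actual code):

```python
# Sketch (NOT youtube-dl's actual code) of the observed behavior:
# ffmpeg needs a seekable output to write an MP4 container, so when two
# DASH formats must be merged, "-" is treated as a literal filename.

def choose_destination(outtmpl, needs_merge):
    """Return where the download goes, mimicking the behavior in the log."""
    if outtmpl == "-":
        if not needs_merge:
            return "<stdout>"  # single pre-muxed format: streaming works
        # separate video+audio: ffmpeg merges into a file named "-.mp4"
        return "-.mp4"
    return outtmpl
```

Under this assumption, selecting a single pre-muxed format (e.g. `-f best`) avoids the merge step entirely and should let `-o -` stream to stdout again.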

created time in a few seconds

pull request comment JuliaLang/julia

Add isemoji function to Unicode stdlib and export it

I think the application is counting emoji in text. You would do it by calling graphemes and then calling isemoji on each grapheme. It seems like this might be better suited to a package which can evolve its API as necessary.
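The suggested usage, transliterated to Python for illustration (the PR itself is Julia, where `graphemes` and `isemoji` would be the real API). The code-point ranges below are an assumption covering only common emoji blocks, not the full Unicode emoji property, and scanning code points stands in for true grapheme iteration:

```python
# Hedged sketch: count emoji by classifying each code point against a few
# well-known emoji blocks. The range list is an approximation, and real
# grapheme-cluster iteration (ZWJ sequences, modifiers) is omitted.
EMOJI_RANGES = [
    (0x1F300, 0x1F5FF),  # Misc symbols and pictographs
    (0x1F600, 0x1F64F),  # Emoticons
    (0x1F680, 0x1F6FF),  # Transport and map symbols
    (0x1F900, 0x1FAFF),  # Supplemental symbols and pictographs
]

def is_emoji(cluster):
    """True if the cluster's first code point falls in an emoji block."""
    cp = ord(cluster[0])
    return any(lo <= cp <= hi for lo, hi in EMOJI_RANGES)

def count_emoji(text):
    # A faithful port would call this on each grapheme cluster, as the
    # PR discussion suggests; per-code-point scanning is a simplification.
    return sum(1 for ch in text if is_emoji(ch))
```

The need for range tables like this (which evolve with each Unicode release) is part of why a package, rather than the stdlib, may be the better home.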


comment created time in a minute

issue opened tensorflow/tensorflow

I am trying to convert yolov3 to a fully uint8 tflite model; afterwards, I got this error: Quantized not yet supported for op: 'EXP'

Quantized not yet supported for op : 'EXP'
Quantized not yet supported for op : 'EXP'
Quantized not yet supported for op : 'EXP'
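As context for the error: full-integer conversion requires a quantized kernel for every op in the graph, and `EXP` had none at the time. One common workaround (an assumption on my part, not official TensorFlow guidance) is to realize `exp` as a 256-entry lookup table over the int8 input domain, the same trick TFLite uses for other nonlinearities. A self-contained sketch of building such a table:

```python
# Sketch of an int8 lookup table for exp(), assuming per-tensor
# quantization parameters (scale, zero point) for input and output.
import math

def build_exp_lut(scale_in, zp_in, scale_out, zp_out):
    """Map every possible int8 input value to a quantized exp() output."""
    lut = []
    for q in range(-128, 128):
        real = math.exp((q - zp_in) * scale_in)   # dequantize, apply exp
        quant = round(real / scale_out) + zp_out  # requantize the result
        lut.append(max(-128, min(127, quant)))    # clamp to int8 range
    return lut

lut = build_exp_lut(scale_in=0.05, zp_in=0, scale_out=0.1, zp_out=-128)
```

In practice this would mean rewriting the model so the box-decoding `exp` is replaced by a table-friendly op before conversion; whether that is feasible depends on the yolov3 implementation in use.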

created time in 2 minutes


push event tensorflow/tensorflow

Måns Nilsson

commit sha 7f79b013c30219b1db19540939d917a49b1c31ad

TFLu: Move Ethos-U custom op out of AllOpsResolver

view details

Måns Nilsson

commit sha b9e623dd7060f989f6cf38dbebf2d0138b43e0e9

Add ethosu.h based on review comment

view details

Måns Nilsson

commit sha a7af34e4039116800255fc6c7a76d0067297ab31

Fix bazel benchmark build

view details

Måns Nilsson

commit sha 477e3022a81aecbc90c8cf8c6e5e441eb5787806

Merge remote-tracking branch origin/upstream/master

view details

Advait Jain

commit sha 4a424e469ff4e80c123f903f97649f4c85ea3dea

Fix internal build and small cleanup.
* removed from micro_ops target which was (correctly) resulting in a duplicate symbol linker error for an internal Python target.
* cleanup some of the bazel build rules.

view details

Advait Jain

commit sha 6f8e49e90b48418a066013ae44c5615b217ed08e

Merge remote-tracking branch 'upstream/master' into ethosu

view details

TensorFlower Gardener

commit sha a78e2b1878e92f2f1687b1762971233016441585

Merge pull request #44968 from mansnils:ethosu PiperOrigin-RevId: 344864254 Change-Id: I2fc4d37bcc743122eed2045010e9035f8054311b

view details

push time in 4 minutes

issue comment tensorflow/tensorflow

Tensorflow 2.x can't feed data with non-uniform shapes to a multi-input multi-output model

I took a look at the code for Conv1DTranspose, and it doesn't currently support RaggedTensors -- i.e., the current implementation assumes that the inputs are dense. So I guess the feature request here is to have Conv1DTranspose (and possibly some other keras layers) be updated to support ragged tensors.

Also, in the second code example you posted, you need to specify ragged=True when creating the keras.Inputs. I.e.: input_1 = keras.Input(shape=(None,5), ragged=True). But that just gives you an error further along (which indicates that Conv1DTranspose doesn't support RaggedTensors).

Thank you for pointing out the error in my code! I'm not sure if TensorFlow has any other ways of handling time-series data with non-uniform shapes, but if RaggedTensor is the only way, then you are correct that my feature request would be to have more layers updated to support RaggedTensor.

Also, as a workaround, if I replace my Conv1DTranspose with other layers that support RaggedTensor, would it then be fine to apply Conv1DTranspose afterwards? Or is it just not possible to use Conv1DTranspose anywhere in my model as long as the shape of my data is non-uniform?

Lastly, do you happen to know what are the layer APIs that support RaggedTensor as input?
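As a stopgap while layers lack ragged support, the usual approach (my assumption, not an official recommendation) is to pad the ragged batch into a dense tensor plus a mask; in TF, `tf.RaggedTensor.to_tensor()` does this. The shape logic, sketched in plain Python:

```python
# Sketch: pad variable-length rows to a dense [batch, max_len] structure
# and build a matching 0/1 mask, so downstream dense-only layers (like
# Conv1DTranspose) can run, with padded positions masked out afterwards.

def pad_ragged(sequences, pad_value=0.0):
    """Pad rows of differing lengths; return (padded, mask)."""
    max_len = max(len(s) for s in sequences)
    padded = [list(s) + [pad_value] * (max_len - len(s)) for s in sequences]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return padded, mask

padded, mask = pad_ragged([[1.0, 2.0], [3.0]])
```

The trade-off is wasted compute on padding and the need to keep the mask threaded through the model by hand.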


comment created time in 5 minutes

issue comment tensorflow/tensorflow

tensorflow.keras MirroredStrategy hangs without error message

Hi @WiestDaessle, to clarify: the training works okay when you run the tutorial, but it hangs when you make these modifications, is that correct? Can you please provide the exact code you are running into issues with so I can reproduce it on my end? You can provide it in a colab notebook, or just paste it in a comment, but the full code (properly indented and with import statements) greatly helps us to troubleshoot. Also, please clarify what you mean by 'the oversize'.

Note that if you are using MirroredStrategy then you should not need to use strategy.experimental_distribute_dataset. Distributing the dataset this way is only required when using a custom training loop.


comment created time in 7 minutes

Pull request review comment ytdl-org/youtube-dl

[nhk] Add support for NHK video programs

<pre>
 import re
 
 from .common import InfoExtractor
+from ..utils import (
+    ExtractorError,
+)
 
 
-class NhkVodIE(InfoExtractor):
-    _VALID_URL = r'https?://www3\.nhk\.or\.jp/nhkworld/(?P<lang>[a-z]{2})/ondemand/(?P<type>video|audio)/(?P<id>\d{7}|[^/]+?-\d{8}-\d+)'
-    # Content available only for a limited period of time. Visit
-    # for working samples.
-    _TESTS = [{
-        # clip
-        'url': '',
-        'md5': '256a1be14f48d960a7e61e2532d95ec3',
-        'info_dict': {
-            'id': 'a95j5iza',
-            'ext': 'mp4',
-            'title': "Dining with the Chef - Chef Saito's Family recipe: MENCHI-KATSU",
-            'description': 'md5:5aee4a9f9d81c26281862382103b0ea5',
-            'timestamp': 1565965194,
-            'upload_date': '20190816',
-        },
-    }, {
-        'url': '',
-        'only_matching': True,
-    }, {
-        'url': '',
-        'only_matching': True,
-    }, {
-        'url': '',
-        'only_matching': True,
-    }, {
-        'url': '',
-        'only_matching': True,
-    }]
-    _API_URL_TEMPLATE = ''
+class NhkBaseInfoExtractor(InfoExtractor):
+    _API_URL_TEMPLATE = ''
 
-    def _real_extract(self, url):
-        lang, m_type, episode_id = re.match(self._VALID_URL, url).groups()
-        if episode_id.isdigit():
-            episode_id = episode_id[:4] + '-' + episode_id[4:]
+    def _get_clean_field(self, episode, key):
+        return episode.get(key + '_clean') or episode.get(key)
 
-        is_video = m_type == 'video'
-        episode = self._download_json(
+    def _list_episodes(self, lang, is_video, is_episode, m_id):
+        return self._download_json(
             self._API_URL_TEMPLATE % (
                 'v' if is_video else 'r',
-                'clip' if episode_id[:4] == '9999' else 'esd',
-                episode_id, lang, '/all' if is_video else ''),
-            episode_id, query={'apikey': 'EJfK8jdS57GqlupFgAfAAwr573q01y6k'})['data']['episodes'][0]
+                'clip' if m_id[:4] == '9999' else 'esd',
+                'episode' if is_episode else 'program',
+                m_id, lang, '/all' if is_video else ''),
+            m_id, query={'apikey': 'EJfK8jdS57GqlupFgAfAAwr573q01y6k'})['data']['episodes']
+
+    def _parse_episode_json(self, episode, lang, is_video):
         title = episode.get('sub_title_clean') or episode['sub_title']
 
-        def get_clean_field(key):
-            return episode.get(key + '_clean') or episode.get(key)
+        pgm_id = episode.get('pgm_id')
+        pgm_no = episode.get('pgm_no')
+
+        if not (pgm_id and pgm_no):
+            raise ExtractorError('Cannot download episode: %s' % episode, expected=True)
</pre>

expected is now false

no need to type expected=False, it's the default.

Oh I see what you mean about --dump-pages, it will print the API response. I guess that could mean I can remove the JSON from the string as well.



comment created time in 7 minutes

pull request comment JuliaLang/julia


@timholy: Still, you didn't address the first-order point: do we want to include weaknesses?

@StefanKarpinski: I do think we should list pros and cons of Julia in a straightforward manner and this seems like a reasonable place to do it.

So yes, I think we should.

@Volker-Weissmann: "I think that most C++ libraries are poorly tested". Are you serious? The whole world runs on C++.

Deadly. I've used a lot of C and C++ libraries in my life and most of them are only tested by letting people use them and report bugs. That's not what I would consider "well tested". Some C/C++ libraries have great test suites, but it is far from the norm.

@Volker-Weissmann: Julia is a niche programming language, that niche being scientific computing.

Well, that's like your opinion, man. Julia is a general purpose programming language and always has been. In any case, we don't need to agree on this, but we're not putting in the Julia manual that Julia is less mature than other languages just because it's more recent.

Anyway, consider e.g. NewvarNode. Using ? NewvarNode, you get

This is a purely internal non-exported type. This is like complaining Python doesn't document the _PyIO_get_module_state function.

combinatorics, statprofilerhtml, DifferentialEquations

Please do open issues on these packages about any issues with their documentation. In any case, pretending that this is a norm in the ecosystem and such does not help convince package maintainers to improve things, so we will not be including anything to that effect here.

Would it be possible to run Documenter.jl on every package on and host it on ? That would help a lot and this is what Rust does.

See JuliaHub: All registered Julia packages have docs there or, if they have opted out, are linked to from there and can be searched.

That's one beefy PC that you have.

No, it's a two-year-old MacBook Pro that wasn't even the fastest CPU when I bought it. I am, however, running Julia master, which is already considerably faster than Julia 1.5.


comment created time in 8 minutes

issue comment iBotPeaches/Apktool

StringIndexOutOfBoundsException (getHTML)

I've tried to make Java "see the light" by using some global command-line options too:
<code>set "JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8 -Duser.language=en"</code>
which makes the input files' encoding explicit, but without luck.

I don't know if you've pushed a fix yet,
but it still happens even in a recent (November 30, 2020) build <a href=""><code>d63088</code></a> from master.

<pre>
verbose ? [N] assets ? [Y] res ? [Y] smalli source ? [Y]

[INFO] java: D:\Software\Java\8\x64\jdk1.8.0_151\bin\java.exe
[INFO] java arc.: -d64
[INFO] java opts: -Dfile.encoding=UTF8 -Duser.language=en -Xmn512m -Xms512m -Xmx2048m -Xverify:none -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dfile.encoding=UTF8 -Duser.language=en

[INFO] ApkTool.jar D:/DOS/android/bin/apktool.jar
[INFO] extract-target: D:/DOS/android/כלכליסט
[INFO] additional arg.s:

"D:\Software\Java\8\x64\jdk1.8.0_151\bin\java.exe" -d64 -jar "D:/DOS/android/bin/apktool.jar" --force --output "project_כלכליסט" decode "D:/DOS/android/כלכליסט"

Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8 -Duser.language=en -Xmn512m -Xms512m -Xmx2048m -Xverify:none -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dfile.encoding=UTF8 -Duser.language=en
I: Using Apktool 2.4.2-d63088-SNAPSHOT on כלכליסט m.apk
I: Loading resource table...
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 24
    at java.lang.String.substring(
    at brut.androlib.res.decoder.StringBlock.getHTML(
    at brut.androlib.res.decoder.ARSCDecoder.readValue(
    at brut.androlib.res.decoder.ARSCDecoder.readEntryData(
    at brut.androlib.res.decoder.ARSCDecoder.readTableType(
    at brut.androlib.res.decoder.ARSCDecoder.readTableTypeSpec(
    at brut.androlib.res.decoder.ARSCDecoder.readTablePackage(
    at brut.androlib.res.decoder.ARSCDecoder.readTableHeader(
    at brut.androlib.res.decoder.ARSCDecoder.decode(
    at brut.androlib.res.AndrolibResources.getResPackagesFromApk(
    at brut.androlib.res.AndrolibResources.loadMainPkg(
    at brut.androlib.res.AndrolibResources.getResTable(
    at brut.androlib.Androlib.getResTable(
    at brut.androlib.ApkDecoder.setTargetSdkVersion(
    at brut.androlib.ApkDecoder.decode(
    at brut.apktool.Main.cmdDecode(
    at brut.apktool.Main.main(
</pre>


Perhaps adapting the following code to your usage can help with normalizing things for you via native functions?


comment created time in 9 minutes

issue comment tensorflow/tensorflow

2.4.0rc2 RTX3080 cuda 11.1 OP_REQUIRES failed at conv_ops_fused_impl.h:697 : Not found: No algorithm worked!

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.


comment created time in 9 minutes

issue comment JuliaLang/julia

Precompiled files in system depot not being picked up

That's GPUCompiler-specific, and doesn't matter for regular precompilation files.


comment created time in 10 minutes

push event tensorflow/tensorflow

Marat Dukhan

commit sha 06fb72020d3af8c2b27fd4f80993ff3f6b110d36

Fix bug in delegating partial subgraphs to XNNPACK

Delegation of partial subgraphs with DENSIFY or FP16 DEQUANTIZE nodes could be broken due to using wrong index (index within the execution plan rather than a global node index) throughout the delegate.

PiperOrigin-RevId: 344863679
Change-Id: Ifbdc9395ca8650fb2a8cafad8a5a1a23839c351a

view details

push time in 11 minutes

Pull request review comment JuliaLang/julia

REPL-only: Print a hint if the user types `exit` in the REPL

<pre>
 function start_repl_server(port::Int)
     end
 end
 
+print_exit_hint(repl::BasicREPL, line) = print_exit_hint(repl.terminal, line)
+print_exit_hint(repl::LineEditREPL, line) = print_exit_hint(repl.t, line)
+print_exit_hint(repl::StreamREPL, line) = print_exit_hint(, line)
</pre>

<pre>
print_exit_hint(repl::AbstractREPL, line) = print_exit_hint(outstream(repl), line)
</pre>

comment created time in 14 minutes

push event microsoft/terminal


commit sha 2a79ba2fd3e6ef1d5934791c2de3513abe70cff5

Teach command palette to ignore case when sorting items (#8432) Closes #8430

view details

push time in 15 minutes

PR merged microsoft/terminal

Teach command palette to ignore case when sorting items Area-User Interface AutoMerge Issue-Bug Issue-Feature Product-Terminal

Closes #8430

+37 -1


3 changed files


pr closed time in 15 minutes

issue closed microsoft/terminal

Ignore case when sorting command palette

Description of the new feature/enhancement

It took me a while to understand why the custom commands I add appear at the bottom. The reason is that when sorting commands in the command palette we do not compare them case-insensitively, and as a result the case is taken into account.

IMHO the case should not matter in the palette (or at least case-insensitive sorting should be optionally available).

Proposed technical implementation details (optional)

When comparing the filtered commands, convert both command names to lower case and then compare lexicographically.
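The ordering difference can be sketched in Python (the Terminal itself is C++; this is purely to illustrate why uppercase custom commands cluster separately under a case-sensitive comparison):

```python
# Case-sensitive sorting groups all uppercase names before lowercase ones
# (ASCII 'Z' < 'a'); lowering the key interleaves them as users expect.
commands = ["Zoom", "copy", "Paste", "attach"]

case_sensitive = sorted(commands)                 # uppercase block first
case_insensitive = sorted(commands, key=str.lower)  # natural interleaving
```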

closed time in 15 minutes


push event JuliaLang/julia

Simon Byrne

commit sha 217a449b8d5ee6a307d6907cf5ec100626fa58ab

Use mpq functions for Rational{BigInt} (#38520) Fixes #9826.

view details

push time in 17 minutes

delete branch JuliaLang/julia

delete branch : sb/mpq

delete time in 17 minutes

PR merged JuliaLang/julia

Use mpq functions for Rational{BigInt} bignums performance rationals

Fixes #9826.

A quick benchmark:

julia> X = [big(rand(1:10000)) // big(rand(1:10000)) for i = 1:100, j = 1:100];

julia> @time X*X; # before
 12.525924 seconds (45.29 M allocations: 1.267 GiB, 8.44% gc time)

julia> @time X*X; # after
  2.343117 seconds (18.95 M allocations: 733.016 MiB, 18.66% gc time, 17.26% compilation time)
+104 -0

0 comment

1 changed file


pr closed time in 17 minutes

issue closed JuliaLang/julia

Exploit mpq functions for Rational{BigInt}?

As discussed on julia-dev, there is some performance advantage to using the GMP mpq functions for Rational{BigInt} operations.

The easiest way to do this would be:

  • Make BigInt an immutable type. (It currently has immutable semantics anyway.) This way, Rational{BigInt} would be binary compatible with GMP's mpq_t (a pointer to an __mpq_struct consisting of the numerator __mpz_struct followed by the denominator), since our BigInt type is already a mirror of __mpz_struct.
  • Define specialized + etc. methods for Rational{BigInt} that call GMP mpq functions.

I get the impression that the main reason that BigInt is a type and not immutable is that this makes it easier to pass by reference to GMP functions. So changing this to immutable would benefit greatly from a better way to pass ccall "out" arguments by reference, as discussed in #2322.

Alternatively, if you want to leave BigInt as-is, then one needs an easy way to convert BigInt to/from an immutable version thereof, and this requires us to add an additional "internal" constructor function BigInt(alloc::Cint, size::Cint, d::Ptr{Limb}).
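For context on what the mpq layer buys: a dedicated rational addition does the cross-multiplication and a single gcd reduction in one call, instead of allocating big-integer intermediates at every step of the generic path. A Python sketch of that operation (the function name is my own, not GMP's API):

```python
# Sketch of mpq-style rational addition: one cross-multiply, one gcd,
# returning a reduced (numerator, denominator) pair. Python ints play the
# role of BigInt here.
from math import gcd

def mpq_add(a_num, a_den, b_num, b_den):
    """Add a_num/a_den + b_num/b_den, returning a reduced (num, den) pair."""
    num = a_num * b_den + b_num * a_den
    den = a_den * b_den
    g = gcd(num, den)
    return num // g, den // g
```

GMP's actual mpq routines are smarter still (they exploit coprimality of the stored operands to use smaller gcds), which is where the benchmark's 5x win comes from.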

closed time in 17 minutes


issue comment JuliaLang/julia

slower than R when calculating `exp.(a)` where `a` is an `Array{Float64,1}` with many elements

This version already vectorizes under certain conditions. Also, LoopVectorization provides a macro which makes this even more robust.

Thanks for your reply. I wasted 2 hours and finally realized that SIMD instructions gain no performance here.


comment created time in 17 minutes

issue closed MATPOWER/matpower

Add MUMPS as linear solver option to mplinsol

MUMPS might be a good candidate for a new linear solver option in mplinsol. Benchmarking would be required, but its support for parallelism may make it a good alternative to PARDISO when solving large systems, but without the license restrictions.

It includes a MATLAB/Octave interface. While this doesn't work with MPI-based distributed memory parallelism, it does support shared memory, multithreaded parallelism through OpenMP and multithreaded BLAS implementations.

closed time in 17 minutes


issue comment MATPOWER/matpower

Add MUMPS as linear solver option to mplinsol

Moved to MATPOWER/mips#2, since that's the master repo for mplinsolve().


comment created time in 17 minutes

delete branch NREL/EnergyPlus

delete branch : global_dataDaylighting

delete time in 21 minutes

push event NREL/EnergyPlus

Matt Mitchell

commit sha b4f704e0515b2aab0e28ee915b954aae4d7c8c13

moving DataConvergParams to state

view details

Matt Mitchell

commit sha 4c427cc0505bd9e35e85dd2eca405ba64237fa47

moving DataConversions to state

view details

Matt Mitchell

commit sha a54c18e3c78dd8f2bc93440b90424db3b1872305

moving DataDaylighting to state

view details

Matt Mitchell

commit sha 32aa7365a56be08c635030f8b3648181806284d1

Merge branch 'develop' into global_dataDaylighting

view details

Matt Mitchell

commit sha a795422e1a2225ef77e20025c02e83d9e770f82d

moving DataDaylightingDevices to state

view details

Matt Mitchell

commit sha 4bbc5273e1eb60171eb8884415f32e44ecb2fea1

moving DaylightingDevices to state

view details

Matt Mitchell

commit sha dd2a9c34c4e3101eb731e0c2a98fe3381f1b20e6

moving DaylightingManager to state

view details

Matt Mitchell

commit sha a7edb5b52a48def43fcc47108506b74af18b8bc1

moving DaylightingManager to state

view details

Matt Mitchell

commit sha 4c190204ad814d8da491eee9a1f8c6daf53f364c

moving DaylightingManager to state

view details

Matt Mitchell

commit sha 4e14332c90d6b1e1f5d7f2ca46288b0f90b69fe3

minor cleanups

view details

Matt Mitchell

commit sha b8c08c7a4a464f495a03c62213716114de6485a2

add back original int values to new LtgCtrlType enum

view details

Matt Mitchell

commit sha 2f2435928f518aac88e81377aa1724bf87d0d604

Merge branch 'develop' into global_dataDaylighting

view details

Matt Mitchell

commit sha 734af19f9a7fc87715a5078c529a4083d2dff7ce

Merge pull request #8393 from NREL/global_dataDaylighting Global Daylighting

view details

push time in 21 minutes

PR merged NREL/EnergyPlus

Global Daylighting Refactoring

Pull request overview

  • Moves DataConvergParams, DataConversions, DataDaylighting, DataDaylightingDevices, and DaylightingManager to state

Pull Request Author

Add to this list or remove from it as applicable. This is a simple templated set of guidelines.

  • [ ] Title of PR should be user-synopsis style (clearly understandable in a standalone changelog context)
  • [ ] Label the PR with at least one of: Defect, Refactoring, NewFeature, Performance, and/or DoNotPublish
  • [ ] Pull requests that impact EnergyPlus code must also include unit tests to cover enhancement or defect repair
  • [ ] Author should provide a "walkthrough" of relevant code changes using a GitHub code review comment process
  • [ ] If any diffs are expected, author must demonstrate they are justified using plots and descriptions
  • [ ] If changes fix a defect, the fix should be demonstrated in plots and descriptions
  • [ ] If any defect files are updated to a more recent version, upload new versions here or on DevSupport
  • [ ] If IDD requires transition, transition source, rules, ExpandObjects, and IDFs must be updated, and add IDDChange label
  • [ ] If structural output changes, add to output rules file and add OutputChange label
  • [ ] If adding/removing any LaTeX docs or figures, update that document's CMakeLists file dependencies


This will not be exhaustively relevant to every PR.

  • [ ] Perform a Code Review on GitHub
  • [ ] If branch is behind develop, merge develop and build locally to check for side effects of the merge
  • [ ] If defect, verify by running develop branch and reproducing defect, then running PR and reproducing fix
  • [ ] If feature, test running new feature, try creative ways to break it
  • [ ] CI status: all green or justified
  • [ ] Check that performance is not impacted (CI Linux results include performance check)
  • [ ] Run Unit Test(s) locally
  • [ ] Check any new function arguments for performance impacts
  • [ ] Verify IDF naming conventions and styles, memos and notes and defaults
  • [ ] If new idf included, locally check the err file and other outputs
+2503 -3171

1 comment

43 changed files


pr closed time in 21 minutes

pull request comment NREL/EnergyPlus

Global Daylighting

CI is OK here. Merging.


comment created time in 21 minutes

pull request comment microsoft/terminal

Make command palette ephemeral

I've got a couple questions and spelling nits. I've checked out the branch and played with it and it feels fine to me, so I'm okay with this. I just want to make sure the TODOs shouldn't be "TODONEs"

@zadjii-msft - thanks for the review! And sorry for all the misspellings (I guess it is that hour of the day.. or of the night.. when I start to err in ways even the spellchecker cannot stop). I am not sure if we need to implement the TODOs now, as both are somewhat hypothetical. I can absolutely work on them if you think this is the right time :blush:


comment created time in 21 minutes

pull request comment JuliaLang/julia

streamline a[begin] lowering via firstindex(a, n)

The linuxaarch64 failure is the usual unrelated NaN == Inf error that we've been seeing recently on CI, so this should be good to merge once the other tests are green.


comment created time in 25 minutes