Gregory Davill (gregdavill), Australia

butterstick-fpga/butterstick-hardware 116

Basic ECP5 based GigE to SYZYGY interface.

gregdavill/advent-calendar-of-circuits-2020 112

1 circuit board design a day for 31 days.

gregdavill/d20-hardware 48

Hardware design files for the icosahedron d20 build.

gregdavill/ecpprog 46

Programmer for the Lattice ECP5 series, making use of FTDI based adaptors

gregdavill/bosonFrameGrabber 32

Simple but Small Frame Grabber

ADDVulcan/ADDVulcan 29

ADDVulcan satellite hacking solutions for Hack-A-Sat 2020

gregdavill/ArcticKoala 24

Development board for Lattice Crosslink-NX 72QFN

gregdavill/foboot 22

Bootloader for Fomu

AmieDD/ADDVulcan 11

ADDVulcan 2020 Hack-A-Sat writeups

gregdavill/Buzzard 9

Software tools for EAGLE silkscreen generation

created tag butterstick-fpga/syzygy-breakout-standard

tag r1.0

Basic SYZYGY pod breaking out to 0.1" proto area

created time in 11 hours

create branch butterstick-fpga/syzygy-breakout-standard

branch: main

created branch time in 11 hours

created repository butterstick-fpga/syzygy-breakout-standard

Basic SYZYGY pod breaking out to 0.1" proto area

created time in 11 hours

issue opened gregdavill/DiVA-firmware

Windows Update tool: Support custom firmware upload to device

Summary

The update software could also be used to upload custom firmware.

Description of feature

Currently the Windows update tool is compiled with an embedded firmware image. Adding some GUI elements should enable the ability to load a custom file (a rough sketch follows the mockups below).

Mockups

[Mockup screenshots: Screenshot 2021-09-15 091716, Screenshot 2021-09-15 092349]
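One possible shape for that file picker, as a minimal sketch only (the updater's actual language and toolkit are unknown to me; the tkinter dialog and the embedded-image fallback are assumptions for illustration):

# Illustrative sketch, not the actual updater code: let the user pick a custom
# firmware file, falling back to the image embedded in the tool.
from tkinter import Tk, filedialog

EMBEDDED_FIRMWARE = b"..."  # placeholder for the firmware baked into the updater

def select_firmware() -> bytes:
    root = Tk()
    root.withdraw()  # hide the empty main window, we only want the dialog
    path = filedialog.askopenfilename(
        title="Select custom firmware (cancel to use the embedded image)",
        filetypes=[("Firmware images", "*.bin *.dfu"), ("All files", "*.*")],
    )
    root.destroy()
    if not path:
        return EMBEDDED_FIRMWARE
    with open(path, "rb") as f:
        return f.read()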

created time in 2 days


Pull request review comment enjoy-digital/litex

{Dep,P}acketizer: properly handle last_be wraparound

     def loopback_test(self, dw):
             header["field_32b"]  = prng.randrange(2**32)
             header["field_64b"]  = prng.randrange(2**64)
             header["field_128b"] = prng.randrange(2**128)
-            datas = [prng.randrange(2**dw) for _ in range(prng.randrange(2**7))]
-            packets.append(Packet(header, datas))
-
-        def generator(dut, valid_rand=50):
-            # Send packets
-            for packet in packets:
-                yield dut.sink.field_8b.eq(packet.header["field_8b"])
-                yield dut.sink.field_16b.eq(packet.header["field_16b"])
-                yield dut.sink.field_32b.eq(packet.header["field_32b"])
-                yield dut.sink.field_64b.eq(packet.header["field_64b"])
-                yield dut.sink.field_128b.eq(packet.header["field_128b"])
-                yield
-                for n, data in enumerate(packet.datas):
-                    yield dut.sink.valid.eq(1)
-                    yield dut.sink.last.eq(n == (len(packet.datas) - 1))
-                    yield dut.sink.data.eq(data)
-                    yield
-                    while (yield dut.sink.ready) == 0:
-                        yield
-                    yield dut.sink.valid.eq(0)
-                    yield dut.sink.last.eq(0)
-                    while prng.randrange(100) < valid_rand:
-                        yield
-
-        def checker(dut, ready_rand=50):
-            dut.header_errors = 0
-            dut.data_errors   = 0
-            dut.last_errors   = 0
-            # Receive and check packets
-            for packet in packets:
-                for n, data in enumerate(packet.datas):
-                    yield dut.source.ready.eq(0)
-                    yield
-                    while (yield dut.source.valid) == 0:
-                        yield
-                    while prng.randrange(100) < ready_rand:
-                        yield
-                    yield dut.source.ready.eq(1)
-                    yield
-                    for field in ["field_8b", "field_16b", "field_32b", "field_64b", "field_128b"]:
-                        if (yield getattr(dut.source, field)) != packet.header[field]:
-                            dut.header_errors += 1
-                    #print("{:x} vs {:x}".format((yield dut.source.data), data))
-                    if ((yield dut.source.data) != data):
-                        dut.data_errors += 1
-                    if ((yield dut.source.last) != (n == (len(packet.datas) - 1))):
-                        dut.last_errors += 1
-            yield
+            datas = [prng.randrange(2**8) for _ in range(prng.randrange(dw - 1) + 1)]
+            packets.append(StreamPacket(datas, header))

         class DUT(Module):
             def __init__(self):
-                packetizer   = Packetizer(packet_description(dw), raw_description(dw), packet_header)
-                depacketizer = Depacketizer(raw_description(dw), packet_description(dw), packet_header)
-                self.submodules += packetizer, depacketizer
-                self.comb += packetizer.source.connect(depacketizer.sink)
-                self.sink, self.source = packetizer.sink, depacketizer.source
+                self.submodules.packetizer = Packetizer(
+                    packet_description(dw),
+                    raw_description(dw),
+                    packet_header,
+                )
+                self.submodules.depacketizer = Depacketizer(
+                    raw_description(dw),
+                    packet_description(dw),
+                    packet_header,
+                )
+                self.comb += self.packetizer.source.connect(self.depacketizer.sink)
+                self.sink, self.source = self.packetizer.sink, self.depacketizer.source

         dut = DUT()
-        run_simulation(dut, [generator(dut), checker(dut)])
-        self.assertEqual(dut.header_errors, 0)
-        self.assertEqual(dut.data_errors,   0)
-        self.assertEqual(dut.last_errors,   0)
+        recvd_packets = []
+        run_simulation(
+            dut,
+            [
+                stream_inserter(
+                    dut.sink,
+                    src=packets,
+                    seed=seed,
+                    debug_print=debug_print,
+                    valid_rand=50,
+                ),
+                stream_collector(
+                    dut.source,
+                    dest=recvd_packets,
+                    expect_npackets=npackets,
+                    seed=seed,
+                    debug_print=debug_print,
+                    ready_rand=50,
+                ),
+            ],
+            vcd_name="{}.vcd".format(with_last_be),
+        )
+
+        # When we don't have a last_be signal, the Packetizer will
+        # simply throw away the partial bus word. The Depacketizer
+        # will then fill up these values with garbage again. Thus we
+        # also have to remove the proper amount of bytes from the sent
+        # packets so the comparison will work.
+        if not with_last_be and dw != 8:
+            for (packet, recvd_packet) in zip(packets, recvd_packets):
+                invalid_recvd_bytes = packet_header_length % (dw // 8)
+                recvd_packet.data = recvd_packet.data[:-invalid_recvd_bytes]
+                packet.data = packet.data[:len(recvd_packet.data)]
+
+        self.assertTrue(compare_packets(packets, recvd_packets))

Just an idea: instead of trying to guess/fake the last_be signal, could this generate a warning/error if the last_be signal isn't present? That way the user can determine the behavior they want.
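For concreteness, a minimal sketch of the kind of guard I mean (purely illustrative; the attribute check and the exception are assumptions, not existing LiteX code):

# Hypothetical guard in the Packetizer constructor: refuse to guess when the
# stream description can't represent a partial last bus word.
if dw != 8 and not hasattr(self.source, "last_be"):
    raise ValueError(
        "Packetizer: description has no last_be field, so a trailing partial "
        "bus word cannot be marked; add last_be or opt in to truncation."
    )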

lschuermann

comment created time in 11 days

pull request comment enjoy-digital/litex

{Dep,P}acketizer: properly handle last_be wraparound

@gregdavill awesome, thanks! You wouldn't believe me how many ~hours~ days of staring at GTKWave this took :smile:. In the version you've tested there still was a bug where the Packetizer/Depacketizer didn't handle transactions of one (partial) bus word, but that presumably doesn't occur with ARP/ICMP/your UDP protocol.

That's correct. I'm merging a header and payload that both aren't fully aligned to 32 bits. I've been able to construct my data such that I avoid many of the more nuanced edge cases, like a header/payload smaller than dw.
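As a toy illustration of that misalignment (the numbers and the one-hot last_be convention here are assumptions for illustration, not taken from my design):

# With a 32-bit (4-byte) datapath, the final bus word is only partially valid
# whenever the total byte count isn't a multiple of 4.
dw_bytes      = 4                          # 32-bit datapath
header_bytes  = 14                         # hypothetical header length
payload_bytes = 37                         # hypothetical payload length
total_bytes   = header_bytes + payload_bytes
valid_in_last = total_bytes % dw_bytes or dw_bytes
last_be       = 1 << (valid_in_last - 1)   # one-hot marker for the last valid byte
print(valid_in_last, bin(last_be))         # -> 3 0b100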

lschuermann

comment created time in 12 days

push event gregdavill/KiBuzzard

x70b1

commit sha 000e86e86035288a8aad4c14de12d3e1bc3788e5

add AUR/kicad-kibuzzard-git

view details

Gregory Davill

commit sha 22e7358f7d5eb4d802700334988d2c2f68ad7869

Merge pull request #56 from x70b1/patch-1 add AUR/kicad-kibuzzard-git

view details

push time in 12 days

PR merged gregdavill/KiBuzzard

add AUR/kicad-kibuzzard-git

I packaged the plugin for Arch users.

If you would like to add a hint, here is my PR for this.

+2 -0

1 comment

1 changed file

x70b1

pr closed time in 12 days

pull request comment gregdavill/KiBuzzard

add AUR/kicad-kibuzzard-git

Thanks!

x70b1

comment created time in 12 days

pull request comment enjoy-digital/litex

{Dep,P}acketizer: properly handle last_be wraparound

Just some user feedback, this is working nicely for me on ECP5 hardware.

I've got a HyperRAM > DMA > FIFO > Packetizer > LiteEth Hybrid MAC pipeline all running at 32-bit dw, where the Packetizer is responsible for forming UDP packets in hardware.
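In rough outline the gateware looks something like the sketch below (illustrative only, not the actual design; packet_description, raw_description and packet_header stand in for helpers like those in the LiteX test, and the DMA/UDP endpoints are supplied by the surrounding SoC):

from migen import Module
from litex.soc.interconnect import stream
from litex.soc.interconnect.packet import Packetizer

class UDPStreamerSketch(Module):
    def __init__(self, dma_source, udp_sink,
                 packet_description, raw_description, packet_header, dw=32):
        # Buffer payload words coming out of the HyperRAM DMA, then let the
        # Packetizer prepend the header before handing off to LiteEth.
        self.submodules.fifo       = stream.SyncFIFO(packet_description(dw), 64)
        self.submodules.packetizer = Packetizer(
            packet_description(dw), raw_description(dw), packet_header)
        self.comb += [
            dma_source.connect(self.fifo.sink),
            self.fifo.source.connect(self.packetizer.sink),
            self.packetizer.source.connect(udp_sink),
        ]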

Thanks!

lschuermann

comment created time in 12 days

push event orangecrab-fpga/orangecrab-hardware

Greg Davill

commit sha e1ab005d2380e4a9b7e615e689bc3c9cff3745b0

doc: Add battery info fixes #42

view details

push time in 18 days

issue closed orangecrab-fpga/orangecrab-hardware

doc: Specify battery connector voltage

Current info in the README doesn't specify that the battery connector is for a 1S LiPo cell.

closed time in 18 days

gregdavill

issue opened orangecrab-fpga/orangecrab-hardware

doc: Specify battery connector voltage

Current info in the README doesn't specify that the battery connector is for a 1S LiPo cell.

created time in 18 days

push event butterstick-fpga/butterstick-hardware

Greg Davill

commit sha 59068fa24e036840685a42cc50f7846bc8ad5c02

hw.r1d0: Add ECN_001

view details

push time in 25 days

issue comment orangecrab-fpga/orangecrab-hardware

Lattice Diamond Compatibility

Downloading over DFU should work, but I've not tried it. Can you try using the .bit file with dfu-util?

If this doesn't work, let me know so I can debug this further.

dramoz

comment created time in a month

issue comment orangecrab-fpga/orangecrab-examples

CircuitPython isn't exporting board?

No, I don't think you're missing anything.

I'm also not much of a CircuitPython user, so I haven't progressed much further down the path of porting this successfully.

tommythorn

comment created time in a month

issue opened enjoy-digital/litex

ecp5: Memory pattern of Async FIFOs not mapping to BRAM

I've recently found an interesting bug in one of my designs. It's fundamentally related to migen.fhdl.specials emitting Verilog code for memory blocks, but it starts back in LiteX. Basically, after a commit in May 2021, Async FIFOs created by LiteX may not be inferred as BRAM.

This is an intentional change on the Yosys front: https://github.com/YosysHQ/yosys/issues/2965

This issue is mostly here so others using LiteX have a workaround until this can be fixed in Migen.

Example

Consider a large Async FIFO in LiteX:

self.submodules.fifo = ClockDomainsRenamer({"read": "video","write": "sys"})(stream.AsyncFIFO([('data', 24)], 1024))

This becomes the following in the generated Verilog:

reg [25:0] storage_4[0:1023];
reg [9:0] memadr_10;
reg [9:0] memadr_11;
always @(posedge sys_clk) begin
	if (soc_asyncfifo_wrport_we)
		storage_4[soc_asyncfifo_wrport_adr] <= soc_asyncfifo_wrport_dat_w;
	memadr_10 <= soc_asyncfifo_wrport_adr;
end

always @(posedge video_clk) begin
	memadr_11 <= soc_asyncfifo_rdport_adr;
end

assign soc_asyncfifo_wrport_dat_r = storage_4[memadr_10];
assign soc_asyncfifo_rdport_dat_r = storage_4[memadr_11];

Having two clock domains, with memadr_11 registered on video_clk and the read itself asynchronous, is the pattern that causes the issue.

It's been recommended (https://github.com/YosysHQ/yosys/issues/2965) to switch to the following pattern; note that the reads now happen synchronously.

reg [25:0] storage_4[0:1023];
reg [25:0] memdat_15;
reg [25:0] memdat_16;
always @(posedge sys_clk) begin
	if (soc_asyncfifo_wrport_we)
		storage_4[soc_asyncfifo_wrport_adr] <= soc_asyncfifo_wrport_dat_w;
	memdat_15 <= storage_4[soc_asyncfifo_wrport_adr];
end

always @(posedge video_clk) begin
	memdat_16 <= storage_4[soc_asyncfifo_rdport_adr];
end

assign soc_asyncfifo_wrport_dat_r = memdat_15;
assign soc_asyncfifo_rdport_dat_r = memdat_16;

Workaround

This small patch for migen.fhdl.specials is working for me, but it might be a bit too broad in its selection of patterns to change, since, as you can see from the example above, it also switches the sys_clk read port to READ_FIRST mode.

diff --git a/migen/fhdl/specials.py b/migen/fhdl/specials.py
index 9344087..d1e3c78 100644
--- a/migen/fhdl/specials.py
+++ b/migen/fhdl/specials.py
@@ -330,6 +330,12 @@ class Memory(Special):
 
         adr_regs = {}
         data_regs = {}
+
+        clocks = [port.clock for port in memory.ports]
+        if clocks.count(clocks[0]) != len(clocks):
+            for port in memory.ports:
+                port.mode = READ_FIRST
+
         for port in memory.ports:
             if not port.async_read:
                 if port.mode == WRITE_FIRST:

created time in a month

issue closed YosysHQ/yosys

ecp5: LiteX dual clock memory may fail to map to DP16KD

Steps to reproduce the issue

Consider this example, where we read and write with different clocks; this is the pattern used by LiteX. Since 1eea06bcc0750de02a460f3e949df2f68f800382 it does not correctly map to DP16KDs if the output does not end up registered.

Run the following Verilog module through synth_ecp5:

yosys -p "read_verilog test.v" -p "synth_ecp5"
module memtest(clk_a, clk_b, wr, wr_addr, wr_value, rd_addr, rd_value);

input clk_a, clk_b, wr;
input [8:0] wr_addr, rd_addr;
input [15:0] wr_value;
output wire [15:0] rd_value;

reg [15:0] mem0 [0:512];
reg [8:0] rd_addr_r;

always @(posedge clk_a) begin
    if (wr)
        mem0[wr_addr] <= wr_value;
end

always @(posedge clk_b)
    rd_addr_r <= rd_addr;

assign rd_value = mem0[rd_addr_r];

endmodule

Expected behavior

Before 1eea06bcc0750de02a460f3e949df2f68f800382 I got this result

   Number of cells:                  1
     DP16KD                          1

Actual behavior

After the mentioned commit, and on the current git master, I get the following:

   Number of cells:               3344
     L6MUX21                       561
     LUT4                         1781
     PFUMX                         865
     TRELLIS_DPR16X4               128
     TRELLIS_FF                      9

In the output from Yosys, it appears not to pick up the output clock domain \clk_b.

2.26. Executing MEMORY_BRAM pass (mapping $mem cells to block memories).
Processing memtest.mem0:
  Properties: ports=2 bits=8208 rports=1 wports=1 dbits=16 abits=10 words=513
  Checking rule #1 for bram type $__ECP5_PDPW16KD (variant 1):
    Bram geometry: abits=9 dbits=36 wports=0 rports=0
    Estimated number of duplicates for more read ports: dups=1
    Metrics for $__ECP5_PDPW16KD: awaste=511 dwaste=20 bwaste=18416 waste=18416 efficiency=22
    Rule #1 for bram type $__ECP5_PDPW16KD (variant 1) accepted.
    Mapping to bram type $__ECP5_PDPW16KD (variant 1):
      Shuffle bit order to accommodate enable buckets of size 9..
      Results of bit order shuffling: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 -1 -1
      Write port #0 is in clock domain \clk_a.
        Mapped to bram port A1.
      Read port #0 is in clock domain !~async~.
        Bram port B1.1 has incompatible clock type.
        Failed to map read port #0.

closed time in a month

gregdavill

issue comment YosysHQ/yosys

ecp5: LiteX dual clock memory may fail to map to DP16KD

Okay, good to know it was a deliberate change rather than a regression.

You're right about the pattern. I think it's supposed to create a write-first memory, but that doesn't make sense when you have an independent read clock.

I've been able to update my project to create the pattern you've described.
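For anyone else hitting this, the change amounts to asking Migen for a registered read port in the read clock domain, roughly like this sketch (sizes and domain names are illustrative, not my actual project code):

from migen import Module, Memory
from migen.fhdl.specials import READ_FIRST

class DualClockMemSketch(Module):
    def __init__(self):
        mem = Memory(24, 1024)
        # Write port in the "sys" domain.
        wr_port = mem.get_port(write_capable=True, clock_domain="sys")
        # READ_FIRST registers the read data (not just the address) on the
        # "video" clock, which is the pattern Yosys can map to a DP16KD.
        rd_port = mem.get_port(mode=READ_FIRST, clock_domain="video")
        self.specials += mem, wr_port, rd_port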

gregdavill

comment created time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha e59b900f227e5318ac4424a9df72cc3a7e7bb6ce

ci: Update config

view details

push time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha 50c366c9693cd8b6951287201495cab6ab3538ce

Update patch

view details

push time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha 035fa532a498b58a127cbdb74dbe8513321f054f

Update deps

view details

Greg Davill

commit sha 9374cf322b1953f5caf8a958d67504e2f4d04de4

ci: Add patch file

view details

push time in a month

issue opened YosysHQ/yosys

ecp5: LiteX dual clock memory may fail to map to DP16KD

Steps to reproduce the issue

Consider this example, where we read and write with different clocks; this is the pattern used by LiteX. Since 1eea06bcc0750de02a460f3e949df2f68f800382 it does not correctly map to DP16KDs if the output does not end up registered.

Run the following Verilog module through synth_ecp5:

yosys -p "read_verilog test.v" -p "synth_ecp5"
module memtest(clk_a, clk_b, wr, wr_addr, wr_value, rd_addr, rd_value);

input clk_a, clk_b, wr;
input [8:0] wr_addr, rd_addr;
input [15:0] wr_value;
output wire [15:0] rd_value;

reg [15:0] mem0 [0:512];
reg [8:0] rd_addr_r;

always @(posedge clk_a) begin
    if (wr)
        mem0[wr_addr] <= wr_value;
end

always @(posedge clk_b)
    rd_addr_r <= rd_addr;

assign rd_value = mem0[rd_addr_r];

endmodule

Expected behavior

Before 1eea06bcc0750de02a460f3e949df2f68f800382 I got this result

   Number of cells:                  1
     DP16KD                          1

Actual behavior

After the mentioned commit, and on the current git master, I get the following:

   Number of cells:               3344
     L6MUX21                       561
     LUT4                         1781
     PFUMX                         865
     TRELLIS_DPR16X4               128
     TRELLIS_FF                      9

In the output from Yosys, it appears not to pick up the output clock domain \clk_b.

2.26. Executing MEMORY_BRAM pass (mapping $mem cells to block memories).
Processing memtest.mem0:
  Properties: ports=2 bits=8208 rports=1 wports=1 dbits=16 abits=10 words=513
  Checking rule #1 for bram type $__ECP5_PDPW16KD (variant 1):
    Bram geometry: abits=9 dbits=36 wports=0 rports=0
    Estimated number of duplicates for more read ports: dups=1
    Metrics for $__ECP5_PDPW16KD: awaste=511 dwaste=20 bwaste=18416 waste=18416 efficiency=22
    Rule #1 for bram type $__ECP5_PDPW16KD (variant 1) accepted.
    Mapping to bram type $__ECP5_PDPW16KD (variant 1):
      Shuffle bit order to accommodate enable buckets of size 9..
      Results of bit order shuffling: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 -1 -1
      Write port #0 is in clock domain \clk_a.
        Mapped to bram port A1.
      Read port #0 is in clock domain !~async~.
        Bram port B1.1 has incompatible clock type.
        Failed to map read port #0.

created time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha 6e43cc748f99798b1f745e74759eea7d1538057c

Add platform name

view details

Greg Davill

commit sha a7eaeacf1790ddf092f2bf339cfcccf0e81f385c

Enable output of nextpnr JSON

view details

Greg Davill

commit sha 248682589cf146d49aab911aa54549a9b93414ed

Update font

view details

push time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha 2957a3eceeafbb30841eb29c5c297a3b6137365e

Improve timing closure

view details

push time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha b624059967a05945776ad80692cc69b606f4ddd7

Fix warning about widths in cdc_csr

view details

Greg Davill

commit sha 7e7330ee2d54a87e21ce276cd0405c319e151f8b

Add Scaler into the pipeline

view details

Greg Davill

commit sha cccc18a18d12cccc5614e2d5cd5f25f0b4a31ea1

Add extra info after build

view details

push time in a month

push event gregdavill/DiVA-firmware

Greg Davill

commit sha 7a8a25964d755e5f5e1b5620dd11017669481e86

scripts: Update scaler coeff-gen

view details

Greg Davill

commit sha 2d67d8e5dd15c4a7863616979309700c3aeb74a2

nfc: whitespace adjust

view details

push time in a month

issue opened gregdavill/beth-firmware

NTP/PTP support

Feature request from Twitter:

Greg, I was wondering if your Boson GIG-E interface will support anything like NTP/PTP or some kind of time sync?

created time in a month