Thomas Hornschuh


Everything posted by Thomas Hornschuh

  1. I think the problem with Wings is similar to Arduino shields: they almost never fit. Either something needed is missing, or there is something unneeded that occupies I/O pins which may be needed for other purposes. I did some work with Arduinos but never bought a shield. I often buy small breakout boards for a single purpose, e.g. sensor boards, real-time clocks, etc. Thomas
  2. I can understand that. With boards like the Arty series, the market for small hobby board vendors like you and Jack gets a lot tighter. The price of these boards is unbeatable and they are fully integrated into the Xilinx tools with board description files, etc. And they are almost always in stock. But I think it is also a big loss of diversity on the market. Although all the boards use the same FPGAs, the peripherals also matter. If I compare your boards with Jack's, and maybe Xess, Numato, etc., they are all different, each with their own strengths and weaknesses. Maybe I should buy a Pepino before it is completely sold out. It is really unique with its 32-bit wide SRAM and all its I/O; it is the perfect retro computing platform. And for the Oberon fans (I don't know if there are many...) it will be sad. Thomas
  3. I don't think so, at least when you upgrade within the area Xilinx calls the "cost sensitive portfolio", which means Artix and Spartan. Basically a larger FPGA is not more complex, it just has more macro cells. When you think about retro computing going from 8- to 16/32-bit systems (like Amiga/Mac/Atari ST, for all of which FPGA implementations also exist), you can leverage the 20-50K "cell" devices. Of course devices like Kintex and Virtex have features which are of no use in this area and at best a waste of money. Place-and-route times increase quadratically with the size of the device, as does the memory required to run the process, but this only happens when your design really needs the area. Well, I also do not really use them. As somebody mainly interested in processor design I prefer to run my own soft cores. But the way Vivado integrates MicroBlaze is for me the "gold standard" of ease of use, which I also aim for with my own designs. You can just drop the MicroBlaze into IP Integrator, drop in your peripherals, memory, etc., edit the address space map and run synthesis. Then you jump to the SDK; it reads an automatically created hardware description file, knows what is on your chip and configures everything. If you have an Ethernet core on the board, you can just use uIP to establish a TCP/IP connection. There are people out there who are not interested in processors; they may do something else with the FPGA. They may have some cool IoT idea and just need a workhorse to connect their idea to the internet. And MicroBlaze is just this workhorse. My idea of course is that my Bonfire processor https://github.com/bonfireprocessor/bonfire is this workhorse. I'm now at least at the point that it is "plug" compatible with MicroBlaze (of course not software compatible...). The part doing this is not yet on GitHub...
And yes, the half-automated Vivado IP design requires more resources than its traditional manually crafted VHDL counterpart on the Papilio Pro, without necessarily being more powerful. But productivity is much higher and I think the learning curve is less steep. Well, I had a chance to discuss this with a Xilinx employee a while ago. FPGAs are different from microcontrollers. There are volume devices, e.g. the Spartan, where they make their money selling chips like most chip manufacturers. But there are a lot of customers using the really expensive, super high-end FPGAs for things like prototyping and simulation of ASICs, or science experiments (e.g. particle detectors). In this type of application the money is earned with tools, support and professional services. Vivado licensing works this way: for the volume chips you can use WebPACK, for the high-end chips you have to pay a premium price (for the chip and the tools...). But I think FPGAs are inherently niche products, which makes them expensive. There are use cases where they are without any competition, but these cases are rare. In volume applications they compete with ASICs, in low-volume and very-low-volume applications they compete with software. Retro computing is also such a niche; FPGAs compete with software emulators. Many more people use an emulator to run a retro computer than an FPGA. Today's computers are so powerful that a JavaScript application in a browser can emulate a Macintosh faster than realtime.
  4. How about Spartan-7? They are available now too. With the Arty S7, Digilent has the first board with it. It is basically Artix-7 without the high-speed transceivers. I'm also seeing a lot of relatively cheap Zynq boards reaching the market, also from smaller companies (e.g. the "ZynqBerry" from Trenz Electronic in Germany). I have the feeling that Xilinx markets Zynq very aggressively, and when I compare the price of Zynq boards with Artix boards I assume that the smaller Zynq devices (especially the single-core Zynq-7007S) are cheaper than Artix. On one side Zynq adds a lot of complexity to an FPGA board, on the other side it can help create a more user-friendly experience. I have not worked with Zynq myself (but I'm sure I will not resist the temptation to buy one of those cheap Zynq boards for very long...), but my understanding is that the Zynq processing system can configure the PL (the FPGA part of the Zynq). So uploading a design over e.g. the network could be possible. No more complicated JTAG stuff... One of the big advances of Vivado over ISE is that Xilinx has changed the licensing policy. The free WebPACK is not limited by features anymore, it is limited by the size and type of device. The devices they don't support are mostly beyond the reach of a hobby user :-) So I have ILA (formerly ChipScope), full MicroBlaze + SDK, a full-featured simulator and so on. The disadvantage is that classic HDL design is not well supported anymore (no .vhi/test bench generation, a harder-to-use syntax check, less informative synthesis messages, no BMM file support for non-MicroBlaze designs), and at least on my system Vivado crashes more often. I now use both: ISE for basic HDL design and "unit testing", Vivado for integration and verification on hardware. For CI flows it may help that Vivado can more easily be used as a command line tool with Tcl scripts in "non-project" mode.
I'm currently in the process of finding a way to integrate Vivado into a classic "make" project where I can say "make all" to compile the boot monitor, synthesize the bitstream, compile eLua and put it all together into an MCS file which I can load onto the Arty. Because I also have a full network stack in eLua, I'm considering allowing updates over the network. It should be possible to have the soft SoC system write a new bitstream to address 0 of the flash and then restart the FPGA. But still more ideas than I can manage in my limited spare time. BTW: maybe you would like to take a look at my newly started YouTube channel: https://www.youtube.com/channel/UCkcJwz3oPKh60YKUc4zs6CQ Please excuse the async audio (all the screen grabbing tools I tried under Linux are horrible, maybe because it is a VM) and my German accent. Thomas
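A minimal sketch of how such a make-driven, non-project Vivado flow could look, written as a small Python driver. This is purely illustrative, not my actual build setup: the file names, part number and top entity are made-up assumptions; the Tcl commands used are the standard Vivado non-project ones (read_vhdl, synth_design, write_bitstream, ...).

```python
# Sketch of a make-friendly Vivado non-project flow (assumptions: file names,
# part number "xc7a35ticsg324-1L" and top entity "bonfire_soc_top" are made up).
import subprocess

def build_tcl(top, part, sources, xdc, bit):
    """Generate a non-project mode Tcl script: read sources -> bitstream."""
    lines = [f"read_vhdl {{{s}}}" for s in sources]
    lines += [
        f"read_xdc {{{xdc}}}",
        f"synth_design -top {top} -part {part}",
        "opt_design",
        "place_design",
        "route_design",
        f"write_bitstream -force {{{bit}}}",
    ]
    return "\n".join(lines) + "\n"

def run_vivado(tcl_path):
    """Invoke Vivado in batch mode; usable as the recipe of a Makefile rule."""
    return subprocess.run(
        ["vivado", "-mode", "batch", "-nolog", "-nojournal", "-source", tcl_path],
        check=True)

if __name__ == "__main__":
    script = build_tcl("bonfire_soc_top", "xc7a35ticsg324-1L",
                       ["rtl/cpu.vhd", "rtl/soc.vhd"], "arty.xdc", "soc.bit")
    with open("build.tcl", "w") as f:
        f.write(script)
    # run_vivado("build.tcl")  # uncomment on a machine with Vivado installed
```

A Makefile "soc.bit" target could then simply call this script, so "make all" rebuilds the bitstream only when the HDL sources change.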
  5. Well, I think the root cause of people finding FPGA development hard and difficult is that they don't understand the difference between hardware and software. Because HDLs borrow many syntactical constructs from programming languages and the code looks like a "program" (this is especially true for VHDL), people think in the wrong direction and get very frustrated. The moment you realize that it describes hardware, and think about expressions as a bunch of gates and about signals set in a synchronous process as flip-flops, it becomes easy. I personally can write and debug VHDL code as fast and productively as C code. OK, things get much harder the moment you go off-chip. It is possible to write software without ever really understanding how it is actually executed in a computer. I know developers who have difficulties understanding what the difference between an integer and a float really is, or why it sometimes goes wrong when you assign a short to a long :-) It is possible to write software with this limited level of understanding (in fact most JavaScript, PHP or Python code is written by such people), but it is not possible with hardware. Even worse, without understanding the concepts it is even impossible to recognize the value of FPGAs.
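The "signals in a synchronous process are flip-flops" point can be shown with a toy model (plain Python, not HDL; purely illustrative): on a clock edge, every registered signal samples its input from the pre-edge values and all outputs update together, which is why two registers can swap values without a temporary, unlike sequential software assignment.

```python
# Toy model of synchronous hardware semantics: every flip-flop samples its
# input from the *pre-edge* values, then all outputs update at once.
def clock_edge(regs, next_fn):
    """Compute all next-state values from a snapshot of the current state
    (the "D" inputs), then commit them simultaneously (the "Q" outputs)."""
    nxt = next_fn(dict(regs))
    regs.update(nxt)

state = {"a": 1, "b": 2}
# In VHDL, "a <= b; b <= a;" inside one clocked process swaps the registers.
clock_edge(state, lambda cur: {"a": cur["b"], "b": cur["a"]})
print(state)  # {'a': 2, 'b': 1} -- no temporary needed, unlike software
```

Reading the two assignments as sequential statements (the software mindset) would predict a == b afterwards; reading them as two flip-flops clocked together gives the swap.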
  6. Hi Jack, first of all I'm glad to hear that you are continuing your work. It sounds a bit ironic that you move away from DesignLab while Xilinx Vivado goes in the opposite direction. The Vivado IP Integrator is like "DesignLab on steroids", just with MicroBlaze and AXI4 instead of ZPUino and Wishbone. I never tried DesignLab but recently bought an Arty board to port my Bonfire project (RISC-V on FPGA, see my forum post...) from the Papilio Pro to the Arty. At first I was disappointed with Vivado because it has less VHDL support than ISE. But I quickly learned that IP Integrator steps in for this. You can integrate any HDL as an "RTL module" in IP Integrator and use the block design to wire the toplevel together. With this I had my processor running together with DDR3 RAM and Ethernet in two weeks. Of course this is far away from the open source idea, but with all the core synthesis work done by proprietary Xilinx tools, FPGA development is never truly open source. But this is all more of a side note. What I personally really like about GadgetFactory is your hardware. I ordered a second Papilio Pro a few weeks ago, because it is the most "hacking friendly" FPGA board on the market (at least for Xilinx chips). It can be leveraged completely with comparatively easy to understand open source HDL, e.g. Hamster's SDRAM controller. Compare this with the Arty: the DDR3 RAM can practically only be managed with a Xilinx MIG, which consumes 25% of the slices and increases synthesis times a lot. And for many usage patterns it is even slower than SDR SDRAM. An upgraded Papilio with a series 7 FPGA would be a dream; of course I have read about the difficulties of BGA... It would also open the path to Vivado, and Vivado could be helpful with your HDL library idea, especially because Vivado IP cores are based on IP-XACT. With FuseSoC https://github.com/olofk/fusesoc there is also an open source package manager supporting IP-XACT.
I have not used it, but I think it goes in the direction you are aiming for. I have doubts that additional tools like the ones you mentioned in your roadmap really help; they make things more complicated rather than easier. Some very subjective words on Arduino: while Arduino is a fantastic idea, the Arduino IDE itself is really crap. It is on the level of Turbo Pascal for CP/M in 1983. While this was a great idea in 1983, it is not in 2017. Regarding your idea with an AMI image: why not do the same with a VirtualBox VM? Cloud is fine, but quickly costs a lot of money :-) Thomas
  7. Hi all, over the last half year I have implemented a processor and surrounding SoC bringing the RISC-V ISA (http://riscv.org) to the Papilio Pro. It implements the 32-bit integer subset (RV32IM). The project is hosted on GitHub (https://github.com/bonfireprocessor). It still needs some additional documentation, cleanup and ready-to-run ISE projects to make it easily reproducible for others. But I am posting this link now to find out if anybody is interested in my work. I will soon also post a bitstream here so anybody with access to a Papilio Pro can play with it. I have also ported eLua to it: http://www.eluaproject.net @Jack: If you like I can also present the project in the GadgetFactory blog. Regards Thomas
  8. Hi all, sorry for the long delay since my last post. I was distracted by a few other things; in addition it took Deutsche Telekom two weeks to get the upgrade of my internet connection to VDSL working. I finally have 50/10 Mbit now instead of 3/0.5 Mbit, so it was worth the trouble. Attached to this post is a bitstream with the working Bonfire SoC for the Papilio Pro. It boots into a monitor program which allows some basic operation of the board. Connection speed is 500000 baud by default. If this is a problem, I can also provide bitstreams with other default baud rates. It should print a message like this:

Bonfire Boot Monitor 0.2d
MIMPID: 0001000e
MISA: 40001100
UART Divisor: 11
UART Revision 00000012
Uptime 0 sec
SPI Flash JEDEC ID: 001720c2

The monitor supports the following commands:

D <address>: dump memory. It will always dump 64 32-bit words, starting by default at address 1000000. Without entering an address, the dump command will automatically dump the next 64 words.

X <load adr> <max size in hex>: download a file with the XMODEM-CRC protocol to <load adr>. Default load address is 100000. When no size is specified it will load the whole file, provided it fits into the DRAM. Normally it is sufficient to just enter X without arguments. It has been tested to work with minicom under Linux.

G <address>: jumps to <address> (default is again hex 1000000 when omitted). Can be used to start a program downloaded with the X command.

E: print XMODEM error status. Shows the status of the last XMODEM download.

T: test DRAM. Makes a simple (destructive) pattern test of the DRAM. When running the bitstream for the first time it is best to use this command to check that everything is fine.

B: change baud rate. The user will be prompted for the new baud rate. Every value between 300 and 500000 is allowed; no further check is done, so it is possible to enter baud rates like 2423 :-)

I: re-display the boot message with some system info.

W: write boot image.
Writes the image downloaded with the X command to the flash ROM. It will write a 4 KB header at flash offset 512 KB, and then the image data directly behind it. The command can only be executed directly after an X command, because it takes the size of the downloaded file to determine the size of the image. In addition, the X command sets the heap "sbrk" address to the first free address after the downloaded code; this address is also written to the flash header.

R: run boot image. Will run an image written with the W command.

The second attachment is a compiled binary of my eLua (http://www.eluaproject.net/) implementation for RISC-V (source is at https://github.com/ThomasHornschuh/elua). To run it, download it with the X command into RAM and start it with G (both commands work with their default parameters). To permanently add it to flash, do the following:

1. Reset the Papilio Pro with the reset button (or reload the bitstream, in case you don't want to program the bitstream to flash).
2. Download with the X command.
3. Write to flash with the W command.

From now on you can start eLua after boot just with the R command:

>r
Reading Header ...OK
Boot Image found, length 339968 Bytes, Break Address: 00063b00 ...OK
Heap: 00062eb0 .. 007ef7ff
eLua for Bonfire SoC 1.0a
__virt_timer_period 1920000
eLua v0.9_bonfire_RV32IM-7-g7996f83
Copyright (C) 2007-2013 www.eluaproject.net
eLua#

You can enter help to get command help. Tip: from the eLua# prompt, run "lua /rom/life.lua" for a demo of the Game of Life in Lua.
It runs 50 iterations and then prints the runtime:

---------O---------------O------
----------O---------------------
-----OO---------O-O-------------
----OO---------OO-OO------------
---OO--O-OO-OOO---O-O----O------
--OO--OO-O---O------O---O-O-----
---O---OOO---O-----O----O-O-----
----OOO------O-O---------O------
-----OOO----OO-OO---------------
------------O-O--------------OO-
-------OO-------------------O--O
---------OO------------------OO-
--------O--O--------------------
---------O-O-------------O------
----------O-------------O-O-----
------------------------O-O-----
Life - generation 50, mem 22.8 kB
Execution time 16.903 sec (16903.39 ms)
eLua#

Enjoy, and please give me feedback if you like it. Regards Thomas

monitor.bit
elua_lua_bonfire_papilio_pro.bin
  9. Good to hear that it may not be worth the effort. My idea to ease the implementation was to switch from Wishbone "incrementing burst" to e.g. "Wrap-8" mode and just start with the offset of the access triggering the miss. So if, for example, the initial miss is at offset 4, the burst will be 4-5-6-7-0-1-2-3. The line offset counter would wrap around automatically anyway. Nevertheless, the hit determination would need additional logic to track the validity of single words in the cache line. Indeed the RISC-V ISA spec specifies exactly this approach as the simplest form of branch prediction. The code RISC-V gcc generates also seems to obey this rule. The RISC-V spec itself tries to be micro-architecture agnostic, but the code generator of a compiler of course cannot be. For example, the code generator assumes that the processor has a barrel shifter and shifts are cheap: masking the upper bits of a word (e.g. converting int to char) is done with a shift left/shift right pair with the number of bits to shift. This has already been discussed at some of the RISC-V workshops/presentations. The RISC-V inventors at UCB focus mainly on designing a Linux-capable 64-bit processor comparable with ARM Cortex-A series designs (without the "bloat", of course). In the community there are more designs focused on microcontroller-class processors. One example is PicoRV32. Thomas
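The wrap-around order described above is easy to sketch (illustrative Python, not the actual Wishbone burst logic): the burst starts at the offset of the missing word and the line-offset counter simply wraps modulo the line length.

```python
# Word order of a "Wrap-8" burst that starts at the offset of the word
# that missed: the line-offset counter wraps modulo the line length.
LINE_WORDS = 8

def wrap_burst(start):
    """Return the word offsets fetched by a wrap burst beginning at `start`."""
    return [(start + i) % LINE_WORDS for i in range(LINE_WORDS)]

print(wrap_burst(4))  # [4, 5, 6, 7, 0, 1, 2, 3] -- matches the example above
```

The benefit is that the word that caused the miss arrives first, so the CPU can resume before the rest of the line is filled; the cost is the per-word valid tracking mentioned above.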
  10. I think you mean lxp32_icache.vhd? Basically this is outdated. The original LXP32 design (which I use as the base for Bonfire) has no real cache; it is more a 256-byte prefetch buffer. When used with large prefetch_size values it has a very negative impact on data access performance with single-port RAMs like external SDRAM: it blocks the bus until the prefetch is finished. I tried to solve this by monitoring dbus_cyc and aborting the prefetch, but it didn't have a noticeable effect. Finally I decided to build a real direct-mapped cache: https://github.com/bonfireprocessor/bonfire-cpu/blob/riscv/rtl/bonfire_dm_icache.vhd It still contains the dbus_cyc signal, but it is not used. Actually I like this cache because it is clean, easy to understand and only consumes 20 slices + RAM. It also has a few drawbacks:
- When the cache line to be accessed changes, there is a one-clock penalty because of the tag RAM access.
- The tag RAM is only updated when the full cache line has been read, therefore the cache miss latency is always the time for reading the full cache line.
The second point is something I would like to change at some time, but it has no high priority yet; I think adding a data cache and branch prediction will help more... The repo still needs some cleanup: there are unused files, and I also changed the name from wildfire to bonfire because I saw more potentially conflicting other users of the wildfire name compared to bonfire, but the old name is still partly in use. Thomas
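For readers unfamiliar with direct-mapped caches, here is a small model of the tag/index/offset address split and the tag-RAM lookup. The sizes are illustrative assumptions, not the actual bonfire_dm_icache parameters, and the line fill is abstracted to a single tag update.

```python
# How a direct-mapped cache splits an address into offset / index / tag.
# Sizes are illustrative, not the actual bonfire_dm_icache parameters.
LINE_BYTES = 32      # 8 words of 4 bytes per cache line
NUM_LINES  = 128     # 128 lines -> 4 KiB cache

def split(addr):
    """Split a byte address into (tag, line index, byte offset in line)."""
    offset = addr % LINE_BYTES
    index  = (addr // LINE_BYTES) % NUM_LINES
    tag    = addr // (LINE_BYTES * NUM_LINES)
    return tag, index, offset

tags  = [None] * NUM_LINES    # the "tag RAM"
valid = [False] * NUM_LINES

def lookup(addr):
    """Hit iff the line's stored tag matches; on a miss, (re)fill the line
    and only then update the tag RAM, as the real cache does."""
    tag, index, _ = split(addr)
    hit = valid[index] and tags[index] == tag
    if not hit:
        valid[index], tags[index] = True, tag   # full line read, then tag update
    return hit

print(lookup(0x1000), lookup(0x1004))  # miss on first access, then a hit
```

Because each address maps to exactly one line, the lookup needs only one tag comparison, which is why the real implementation stays so small (20 slices + RAM).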
  11. Hi, not if you use the embedded multipliers. Those are slow (I never managed to get a 32x32 to work above ~105 MHz or so). I hope that with the 4-stage multiplier https://github.com/bonfireprocessor/bonfire-cpu/blob/riscv/rtl/lxp32_mulsp6.vhd the clock can be higher. Of course the mult instructions now take 4 clocks instead of 2. It also consumes fewer LUTs than the original design. Definitely :-) A data cache is the hard part compared to a code cache. The boot monitor is in the bitfile, added with data2mem; currently I have 32 KB for it, the final version should be smaller. The 2nd stage is then loaded from a fixed address in flash to DRAM, or downloaded with XMODEM. The second stage should implement a file system (e.g. SPIFFS). Currently my boot monitor also has a flash write command, so initialisation of the flash is done with an XMODEM download first and then a write to flash. It automatically writes the downloaded number of sectors to flash, and also a small header with information about the size. To simplify testing, the boot loader implements a small subset of Linux-type syscalls. It uses the same ABI as the RISC-V Spike simulator (proxy kernel), so I can execute programs compiled for Spike.
  12. Currently I'm using gcc. There is an LLVM port going on by Alex Bradbury from lowRISC (http://www.lowrisc.org/); I recently spoke with Alex at an event in Munich. They have made great progress: LLVM is able to pass 90% of the gcc torture tests right now. They are also in the process of upstreaming both the gcc and the LLVM ports. The LLVM port will now also support RV32; the preliminary port on the riscv.org website only supports RV64. My design currently qualifies for 100 MHz. I think I can quite easily reach about ~130 MHz; currently the limitations are more in some not-so-optimal code in the SoC (e.g. the 32 KB BRAM for the bootloader is organized as 16 blocks of 2K*32 bit, which is not the best way to organize it, but it helps to run the same setup in simulation and on hardware quickly...). The whole system (with UART, SPI interface and DRAM controller) uses 60% of the LX9 slices, the CPU itself 743 slices. It can go down to less than 500 slices if the M extension (mul and div) is removed, along with some of the privileged-mode things (e.g. 64-bit cycle counters...). The RISC-V privileged mode is not very FPGA friendly, because the CSRs are allocated in a sparse 12-bit address space, consuming a lot of comparators and muxes to implement. Running completely in block RAM I reach 0.67 DMIPS/MHz; in DRAM it reaches only 0.35 DMIPS/MHz. The main reason is that I don't have a data cache implemented yet, only an instruction cache. I will upload the bitstream and the binaries for eLua and Dhrystone soon so you can easily test it :-) Thomas
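For reference, the DMIPS figures above are Dhrystones per second normalized to the VAX 11/780 reference score of 1757 Dhrystones/s, then divided by the clock frequency. A quick sanity-check sketch (the 117,719 Dhrystones/s figure is derived from the numbers above, not a measured value):

```python
# DMIPS = Dhrystones/second divided by 1757 (the VAX 11/780 reference score);
# DMIPS/MHz then normalizes out the clock frequency.
VAX_DHRYSTONES = 1757.0

def dmips_per_mhz(dhrystones_per_sec, f_mhz):
    return dhrystones_per_sec / VAX_DHRYSTONES / f_mhz

# At 100 MHz, 0.67 DMIPS/MHz corresponds to roughly 117,700 Dhrystones/s:
print(round(0.67 * 100 * VAX_DHRYSTONES))  # 117719
```

This normalization is what makes scores of cores running at different clock rates comparable.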
  13. Hi, I'm currently connecting an ESP8266 WiFi module to the Papilio Pro. I tried to find a specification of the max. current the 3.3V rail can deliver. According to the data sheet of the LTC3419 converter it can deliver 600 mA. My measurements show that the ESP8266 draws around 190 mA average current. I'm aware that the USB port itself is also limited, but that is not my question at the moment. I think the Spartan-6 draws most of its current on the 1.2V rail, so the 3.3V rail must power the other chips (SDRAM, flash, FTDI), so I think the additional 200 mA is OK. Thomas
  14. My concern is not so much the stability of the ESP8266; I would just like to get some information (preferably from Jack) about the power budget left for Wings on the 3.3V rail. According to the datasheet the ESP draws about 180 mA when sending (in 802.11b mode; in newer modes it is even less). My measurements confirm that. Very short peaks with more current are possible; these may influence stability but not damage the switching regulators.
  15. Hi, I have seen that the Pro is out of stock, and also can't be preordered. Will it be produced again or is the product retired? Regards Thomas
  16. Are there still SDR SDRAMs on the market? DDR will be difficult with a soft memory controller, and SRAM is small and expensive... I'm currently working on a project which implements the RISC-V ISA (see riscv.org) and runs a port of eLua to this architecture on a Papilio Pro. It is making great progress; I will post more information here in the forum in the next days. I'm already thinking about moving to another board (e.g. Pipistrello, Arty). But in some ways I like the PaPro a lot: it is "publishing friendly" because I don't need a Xilinx CoreGen-generated core to access the DRAM. With ISE a synthesis run for the LX9 usually takes only a few minutes including map, place and route, so it is possible to quickly check design changes in hardware. As long as a design fits in the LX9 it is really a convenient platform. And Xilinx has recently announced that they will continue supporting the Spartan-6 series because of its success. Thomas
  17. Hi, I'm just trying to find the Linux download. When pressing the Download button I get three files to select from. I tried the zip. It contains a Papilio Loader that I can start under Linux, but it did not connect to the board. It is not in sync with the how-to: http://blog.gadgetfactory.net/2013/09/howto-papilio-loader-gui-on-linux/ There is no Linux installer script in the download, and also no makefile in the papilio-prog directory. The FTDI kernel drivers are loading; I can access the serial port of the board with a terminal program. Thomas
  18. Hi all, I have now published the project on Bitbucket: https://bitbucket.org/thornschuh/retro80
  19. Hi all, finally I managed to finalize my work in a way that I can upload it. The bit files are my extended version of SOCZ80 (I named my variant "RETRO80"). The "Serialboot" bitfile contains the original ROM monitor from Will, which interacts with the serial console (the baud rate is fixed at 115200 bit/sec). The video RAM is initialized with a test pattern (by initializing the block RAM with VHDL code), so it is easy to check if VGA is working. The "consoleboot" bitfile contains a ROM monitor which uses the VGA port and the PS/2 "A" port of the Arcade MegaWing as console. Unfortunately the keyboard layout is German at the moment... Having a boot image is much more useful. To get the boot image onto the system you can either use the method described in Will's readme.txt, or, as a faster alternative, merge it at address 0x200000 into the bitfile and upload it. For merging bitfiles there are different tools; I used papilio-prog. The command is:

papilio-prog.exe -v -f retro80Serialboot20160604.bit -b ..\bscan_spi_xc6slx9.bit -a 200000:retro80_200.image

Of course you must adjust the paths to the structure on your system. My example was run directly from the directory where papilio-prog resides after installation of the Papilio tools. When everything is loaded into the Papilio Pro's flash chip and you restart the board (best by power cycling), you should see the boot monitor prompt either in your terminal emulator or on the VGA display. Then you enter:

rread 200 200
rboot 200

Now CP/M is booted; the provided image will always boot CP/M on the serial console. On this console you can enter "MPMLDR", which will fire up MP/M II running console 0 on PS/2 / VGA and console 1 on the serial port. The disk image contains a lot, e.g. Turbo Pascal and WordStar. Please note that all these programs are patched to use the ADM-3A screen sequences of the VGA console, so they will not work with the usual VT100 emulation of e.g. PuTTY or TeraTerm. There is a turbovt.com which can be used with VT100.
On user 6 there is an adapted version of the Star Trek game. You can start it with:

user 6
startrek

If you want the CP/M part to work on VGA/PS/2 as well, you can patch the disk image:

1. Reset the SoC with the reset button on the Arcade MegaWing (this only resets the CPU/board, it does not reload the bitfile and will therefore not harm the DRAM contents).
2. Boot CP/M.
3. Enter: sysload bootvga.bin a:
4. Press "C" to continue (the sysload program is missing a message saying this...).
5. Reset the system again. rboot 200 should now boot to the VGA/PS/2 console.

To make the change permanent you need to enter rwrite 200 200 in the ROM monitor. I will post a link to my Bitbucket repository soon. Regards Thomas

retro80Serialboot20160604.bit
retro80consoleboot20160620.bit
retro80_200.image
  20. Hi Alan, I already downloaded your fork a while ago to look into it. SD card is one of the things on my list. I'm struggling a bit with the fact that there are not enough free I/O pins anymore when using the Arcade MegaWing. There is exactly one pin missing; I'm considering just cutting one of the joystick pins - I'm personally not interested in games. CP/M 3.0 is much newer than MP/M II, and one advantage is indeed that it can do the blocking/deblocking itself, so there is no need to do it in the BIOS. BTW, I also took a closer look at the timing violations of SOCZ80; I think you mentioned them a while ago in this thread. With the timing analysis tools in the ISE "PlanAhead" tool it is not that hard. I'm afraid the reason is a 14-level-deep logic path in the T80 CPU which cannot be changed without completely redesigning the core. On the LX9 the T80 core simply cannot run faster than ~85 MHz. That it works at 128 MHz simply shows that the Xilinx devices seem to have a lot of overclocking margin. 85 MHz, on the other hand, is close to the clock range where Hamster's DRAM controller becomes unreliable. A "clean" design of SOCZ80 would need to run the DRAM clock separately from the CPU clock. I have already taken a look into the design of the cache; it should not be too difficult to separate the clock domains in the cache controller. Regards Thomas
  21. Hi Jack, thanks for your offer. I hope I'll have time on the weekend to upload everything. Thomas
  22. Hi all, last November I received my Papilio Pro. One of the projects I immediately tried out was Will Sowerbutts' SOCZ80. I really liked it, especially since I started my computing experience with real CP/M computers at the beginning of the 80s. This included writing my own BIOS, etc. My old CP/M computer still exists, but is not usable anymore because over the years all the floppy disks got lost. So working with SOCZ80 was exactly what I was searching for. In the meantime I have made a lot of extensions to it:
- Integration of a text mode video controller from OpenCores: http://opencores.org/project,interface_vga80x40
- PS/2 keyboard
- Adapted to use the Arcade MegaWing for PS/2, VGA, GPIO LEDs and the reset button
- Extended the CP/M 2.2 BIOS and the MP/M XIOS to support PS/2 and VGA
- ROM monitor and boot loaders that can be used with VGA/PS/2 - so my extended version can be used as a real stand-alone computer when connected to a power supply, monitor and keyboard
- The simulated terminal supports ADM-3A/TVI950 escape sequences instead of VT100. They are easier/smaller to implement and many CP/M programs have difficulty generating VT100 sequences. The disadvantage is that the local console is incompatible with the serial console, because I'm not aware of any Windows terminal emulator supporting ADM-3A
- The keyboard supports WordStar-compatible control characters for the cursor keys
- For MP/M, console 0 is the serial UART and console 1 is PS/2 / VGA
Other hardware/software extensions:
- Added interrupt mode 2 support to the hardware and adapted the MP/M XIOS to it (this allows the use of normal CP/M debuggers under MP/M without crashing the system, because the XIOS no longer uses RST7 for interrupts)
- Fixed a bug in the BIOS/XIOS which did not take into account that a CP/M DMA buffer can cross a 4K boundary. This occasionally led to data corruption, e.g. when copying a file with CP/M PIP.COM
Software: I also wrote an XMODEM receive program in Turbo Pascal which makes transferring files a bit easier.
I collected a lot of old software like WordStar, Multiplan, CBASIC, MBASIC, Turbo Pascal, Microsoft Fortran etc. and configured it to run with the video terminal. I created some example programs in Turbo Pascal, MBASIC, CBASIC and even Fortran (it was the first Fortran program of my life - really fun :-) ). I also collected things like Eliza and Star Trek, and adapted the MBASIC Star Trek to compiled CBASIC. I still plan to do more, e.g. an improved video controller supporting higher resolutions, and virtual console support for MP/M. My focus is currently on the MP/M implementation, because MP/M was beyond my reach in the 80s and I'm really fascinated by the elegance and power of this little multi-user/multi-tasking system. I support CP/M as bootloader and "maintenance mode" for MP/M, but currently have no plan to support CP/M 3.0. I'm currently preparing to publish my extended SOCZ80 together with some disk images. The biggest limitation of the current implementation may be that I have only adapted it to the German keyboard layout; this can easily be changed. My pre-announcement here is mostly intended to find out if there is anybody out there interested in my work, and I'm also searching for someone to act as a tester to check whether my distribution files (FPGA bitstream and disk images) are OK. Looking forward to any reply. Regards Thomas