Brace Computer Laboratory 2018 year-end review

As the year comes to an end, I would like to recap what Brace Computer Laboratory was able to accomplish this year.

For the first half of the year, most of my efforts were devoted to further developing the OpenChrome graphics stack.  Probably the biggest accomplishment this year for OpenChrome DRM was finally figuring out why changing the screen resolution at run time causes the X Server to crash.  I finally added device support for VIA Technologies VT1632(A) and Silicon Image SiI 164 DVI transmitters.  I also discovered that the code to properly set up the display FIFO for the CLE266 and KM400 chipsets was missing, so I added it.  With all of these efforts, the OpenChrome DRM KMS code achieved rough feature parity with the OpenChrome DDX UMS code.

However, during the second half of the year, the momentum slowed considerably.  Although it happened during the first half of the year, the test code that would have formed the foundation of the replacement for the current GEM / TTM memory allocator code was lost when an ADATA Ultimate SU800 128 GB SSD suddenly died.  That effort was restarted, and the newer code is more complete, but it is still not finished enough for me to test.

Regarding OpenChrome DDX, I did not put as much effort into developing it as I did in prior years, but I made some progress in fixing several outstanding issues.  For example, I fixed the Samsung NC20 netbook and VIA Embedded EPIA-M830 mainboard standby resume issues.  I also fixed ECS VX900-I mainboard VGA screen distortion when an HDMI display is connected simultaneously.  Towards the end of the year, I put in the initial effort into fixing the VT (Virtual Terminal) display issue (i.e., blank screen) after the computer has gone through at least one standby resume.  Unfortunately, that code was not perfect.  Finally, I worked together with someone who wanted the VT issue solved yesterday, and we appear to have fixed the issue (an HP 2133 Mini-Note was used for validation).  Now, you should be able to use the VT even after standby resume. (Previously, this worked only when OpenChrome DRM was in use.)

During the second half of the year, I also started getting into developing graphics stacks other than OpenChrome.  I chose the R128 (ATI Technologies RAGE 128) graphics stack.  I did make two minor releases (here and here), but the second one was merely to fix an issue introduced by the first.  I spent some time trying to figure out how to fix the standby resume issue and the EXA rendering artifacts issue.  Unfortunately, I made no progress in those areas this year.

Towards the end of the year, I started doing minor maintenance work, cleaning up the old, neglected DDXs so that at least they do not emit so many compilation warnings with newer X Servers.  I cleaned up all the compilation warnings and released newer versions of the DDXs for Intel740, Number Nine Imagine 128, Matrox, NeoMagic, and Chips & Technologies.

Finally, I would like to thank all of those who supported me financially by donating to Brace Computer Laboratory.  I hope to continue improving and fixing underserved graphics device drivers next year and for many years to come.


Finally fixed OpenChrome DDX VT blank screen issue after standby resume

For some time, I have known that when the OpenChrome DDX UMS (User Mode Setting) code goes through one or more standby resumes (i.e., recovering from ACPI S3 State), the computer will more than likely completely lose control of the VT (Virtual Terminal) screen.  It is highly system dependent, and some models do not exhibit this behavior; however, the number of models that do not is fairly small.  The real reason I could not deal with this was that I lacked an idea for how to solve the issue.

Several months ago, I started to think that if the DDX saved all display related registers during DDX initialization (i.e., X Server initialization), it should be able to restore them perfectly every time, including after going through a standby resume cycle.  So I made this commit in November that fixed the issue for some models (i.e., ones without an FP).

Two days ago, I got together with an actual OpenChrome user, and this person pointed out a VT screen issue present since the November commit.  After noticing that one particular register (CRA2, or 3X5.A2) differed between the code before the November commit and after, I was able to come up with the fix quickly.  The VT now works fine before and after standby resume.  We both used an HP 2133 Mini-Note for the validation (with different OSes).

The real problem I saw here is a flaw in the X Server RandR era callback functions.  They have no serious mechanism to handle a situation like this, and I had to come up with a “scheme” (or you can call it a “hack”) to deal with the matter.  I am probably the last person on Earth who will ever deal with X Server DDX UMS, but if someone else ever does, the “scheme” of saving all the VGA registers during initialization appears to do the trick for restoring the VT.

xf86-video-neomagic Version 1.3 released

Here is the announcement.  I do not really have a functional laptop that has a NeoMagic graphics chip, so I have no idea if the code even works.  You probably have to resort to using PuppyLinux in order to run xf86-video-neomagic, since laptops that had NeoMagic graphics chips typically did not support more than 256 MB of main memory.  I do not really have the option of testing the code on a desktop mainboard, since I have never seen a standalone AGP or PCI graphics card based on a NeoMagic graphics chip.  I am sure such cards existed back then (i.e., more than 20 years ago), solely for use by laptop PC manufacturers during laptop development (i.e., reference design cards).  This probably means there will never be KMS support for it, since you need a minimum of 512 MB of RAM for Linux kernel related development nowadays, and 1 GB or more is preferred.

xf86-video-chips Version 1.3 released

Here is the announcement.  I do not really have a functional laptop that has a Chips & Technologies (C&T) graphics chip, so I have no idea if the code even works.  You probably have to resort to using PuppyLinux in order to run xf86-video-chips, since laptops that had C&T graphics chips typically did not support more than 256 MB of main memory.  As I recall, there were some C&T desktop PCI graphics cards, mainly for the embedded market.  I almost kept a PCI graphics card with a C&T chip that I purchased at the now defunct WeirdStuff Warehouse (thanks, Google, for gobbling up all the real estate around Mountain View and neighboring cities like Sunnyvale), but returned it since the VGA output did not work without running some kind of special utility to turn on the VGA.  This particular card had some sort of FP support, and apparently, supporting VGA was an afterthought for the card manufacturer.  I may have bought a different C&T graphics card probably within a year of the WeirdStuff Warehouse closure, but I am not 100% sure about it.

xf86-video-mga Version 2.0 released

Here is the announcement post.  To be honest, the code for xf86-video-mga really should not be released at this point, but for various reasons, I feel it needs to ship now:

  • There has not been a new release for a while (almost 2 years)
  • Already started the new release process
  • Rendering issues with EXA affecting some models were fixed

Just for disclosure, the code is currently broken on Millennium, Millennium II, and G550.  For Millennium and G550, it appears that whoever wrote the EXA code completely broke it for those devices (i.e., segmentation fault).  Perhaps disabling acceleration might work around the issue for now (I did not have the time to experiment).  As for Millennium II, the rendering is completely messed up, so the code is completely useless.  I tested both the PCI and AGP versions of Millennium II and G550, and the results are the same.
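For anyone who wants to try the acceleration workaround, a device section like the following should do it.  "NoAccel" is a standard xf86-video-mga option, but as noted above, I have not verified that it actually avoids the segmentation fault:

```
Section "Device"
    Identifier "Matrox"
    Driver     "mga"
    Option     "NoAccel" "true"    # disable 2D acceleration entirely
EndSection
```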

What this means is that this version’s code works mainly on G200, G400, and G450 at this point.  I do not have access to a Mystique or a G100 at this point, so the code is untested on those models.

xf86-video-i128 Version 1.4 released

Here is the announcement post.  I do understand that the hardware itself is about 20 years old, but I at least want the code to compile without warnings, so I decided to release a new version.  The code now compiles without warnings even with the -Wall option added to the compilation flags.

To be honest, I did not really test the code on real hardware prior to the release.  I did purchase one PCI and one AGP version of the Number Nine Imagine 128 from the now defunct WeirdStuff Warehouse, but I do not have immediate access to them.  They are probably in one of my boxes full of graphics cards.  After all, there is a reason why this blog is called the Brace Computer Laboratory blog.

Off topic, but considering what Google did to WeirdStuff Warehouse, I honestly will never, ever want to work for Google.  Not that that will ever happen, since it is very, very hard to get interviews from them unless one graduated from a top tier university. (I applied to several of their FPGA design related positions last year, but never got even a phone interview.)  Some of the area’s large tech corporations are becoming ever more predatory when it comes to gobbling up office space around the South Bay Area.  I am not against large corporations expanding, but please do so without displacing long term tenants like WeirdStuff Warehouse.  I really hate seeing a South Bay Area institution put out of business like this.

xf86-video-i740 Version 1.4 released

Here is the announcement post.  I cross-posted it over at the Intel-gfx mailing list.  I do not mean to bother Intel people, but hey, it is your company’s old chip; hence, the announcement had to be cross-posted there.

For those who do not know what Intel740 is (was), as of now (end of Year 2018), it is the only discrete graphics chip Intel has officially shipped ever since Intel got deeply involved in the x86 PC system business with the likes of the PCI bus and USB.  And yes, that was almost 21 years ago.  Intel is getting ready to get “back” into the discrete graphics business (again), possibly as a way to fill their expensive fabs with silicon wafers.  As an amateur hardware industry watcher, I guess they must have sensed that they are not very good as a corporation at producing low-cost, low-power products (i.e., smartphone SoCs produced at TSMC), so they might as well concentrate on high-power, high-margin discrete graphics products.  In general, it is understood in the industry that Intel is good at developing products on their own process technology using very high speed transistors (although the transistors tend to be leakier compared to TSMC’s as a trade-off).  Hence, its process technology emphasis matches the general direction the GPU industry is headed (i.e., use of the GPU as a computation accelerator), where high performance is appreciated and high cost and power consumption can be tolerated as long as the performance is outstanding.  These characteristics match Intel’s business model very well (i.e., the x86 processor business), although they will have to be competitive overall against NVIDIA if they want to not lose money in it.

When I was an undergraduate student in EE (Electrical Engineering) some years ago, there was one lecturer who mainly taught graduate courses in the department at this university.  I got to know this person because I took one graduate class with him (a digital IC design class).  I was probably one of the five “American” students (some “American” students were naturalized citizens who immigrated from another country) out of the 130 or so students who took the class. (Note: The class had two sections totaling about 130 students.  The second section of 30 people was added due to popular demand.  I was in the first section of 100 or so.)  I did get an A- grade in the class, but it really did not help me get a full time job after graduation.  I would imagine all the non-American students got jobs.  Some years ago, it was almost a sure thing for foreign graduate students in EE or CS to get jobs, unlike us American students.  This really is not true anymore, especially since Year 2014 or so.  There are simply too many foreign graduate students in EE or CS, and there are not enough jobs to go around even for them.  I have seen two new college graduate MSEEs each toiling in a municipality mandated minimum wage (i.e., higher local minimum wage) job at $13.00 / hour on Optional Practical Training (OPT; it essentially acts like a shadow H-1B visa program for up to 3 years for foreign master’s and Ph.D. NCGs) after graduation.  They were working on a computational accelerator card that contains an Intel Stratix 10 FPGA for a business owner who did not have any hardware industry experience before getting into the hardware business . . .  It appeared that they (all of them; the business owner and the two inexperienced MSEEs) were struggling due to lack of experience and very low staffing.
On the other hand, a friend of mine who took this class with me and worked on the class project with me is still stuck in an unending “contract” employment loop reserved for some unlucky American citizens in the computer / electronics industry.

Going back to the main story, the funny thing was that this lecturer was far, far more accomplished than pretty much all the tenured and tenure track professors combined who taught in the university’s EE department. (Note: I did not go to a name brand top tier university; hence, strange things like this happen if you get someone accomplished from the industry willing to work for a mere 1/3 of the pay of the tenured professors as a hobby / quasi-retirement job.)  This person actually worked on the Intel740 project as a senior manager, and he told me that VGA compatibility circuitry developed by C&T (Chips & Technologies) went into Intel740 and subsequent Intel integrated graphics, starting with the Intel 810 chipset.  Please note that he was not really involved in the 3D portion; his specialization was in the VGA / 2D portion (note the C&T reference).  Anyway, even though PC industry historians (is there such a thing?) consider Intel740 to be something of a failure (I personally do not agree with this view, but that is what some people say), it is the original root of pretty much all the Intel integrated graphics a lot of people use today.

Considering that I personally know someone who worked on the Intel740 project, it feels odd for me to be involved in releasing the xf86-video-i740 DDX for the X.Org X Server.  I did not really write any code for it (only one small patch to suppress a compilation warning), but I did test the pre-release version of the code on a Shuttle AV40 mainboard.  This mainboard supports the Intel Pentium 4 and has a universal AGP slot (i.e., an AGP slot with both 1.5 V and 3.3 V signaling support), and Intel740 requires an AGP 3.3 V or universal slot.  By the way, the Shuttle AV40 has the VIA Technologies P4X266 chipset, and that is why I was able to test Intel740 with an Intel Pentium 4. (Note: Intel chipsets since the Intel 850 do not support AGP 3.3 V signaling.)

Actually, Real 3D (a joint venture between the notorious money wasting defense contractor Lockheed Martin and Intel) had a graphics card called Starfighter PCI that used a PCI to AGP bridge and one or two SDRAM devices as texture storage, but it is a pretty rare graphics card, so I did not test the code on it since I do not own one.  I did test the code on a Real 3D Starfighter AGP and an unknown Taiwanese vendor’s Intel740 AGP graphics card (the same one pictured).  Both of them worked fine.

Speaking of “an unknown Taiwanese vendor,” the same Intel740 senior project manager told me that around Year 1999, Intel dumped their unsold (i.e., unwanted) inventory of Intel740 chips in Asia (i.e., Taiwan, Hong Kong, and China, but not Japan or South Korea) for somewhere around $5 per chip; hence, there were so many variants of Intel740 graphics cards coming out of Asia at that time.  Before integrated graphics became the norm in the PC industry, there used to be something called the $40 graphics card business (some cards even sold for $30), and Intel740 was eventually relegated to that bargain basement bin of mom and pop computer dealers, along with unpopular models from S3 (ViRGE and Trio3D), SiS (6326 and 305), and Trident Microsystems (3DImáge9750, 3DImáge9850, and Blade3D).  Integrated graphics pretty much killed off this category of the PC component business for all practical purposes. (The category still exists, but the volume is very small today.)

Getting back to my blog post, because one of the focuses of my Linux based OS development is getting standby resume to work properly (perhaps my only significant contribution to the OpenChrome Project), I did test the xf86-video-i740 DDX to see if it can handle resuming from ACPI S3 State.  Interestingly, the Shuttle AV40 mainboard supports ACPI S3 State (Suspend to RAM, or STR), and when I tested standby resume with an NVIDIA GeForce2 MX (running the Nouveau DDX and DRM), it resumed flawlessly, so I was confident that the ACPI S3 State resume handling code inside the BIOS works fine.

The verdict on xf86-video-i740 DDX standby resume handling is that it does not lock up the system, but some registers are not restored by the DDX mode setting code; hence, the display becomes corrupted.  I have seen something like this with the OpenChrome DDX, and I was eventually able to fix the issue there.  Perhaps, if I am willing to spend the time on it, I may be able to fix it here as well; however, I do not have the hardware register documentation for the chip.  If someone has it, please e-mail it to me.

In practice, the xf86-video-i740 DDX is not really usable even if you wanted to use it.  Honestly, I do not think anybody still uses it on Linux / BSD today.  This comment is coming from someone who gets ridiculed regularly for working on old graphics device drivers (i.e., the Phoronix articles written about me over the years and some of their visitor comments).  The device is not practically usable because 2D acceleration has been turned off ever since XAA was removed from the X Server, and without acceleration, the rendering speed is shockingly slow.  You will notice this if you try to move a window in the Xfce window manager (i.e., Xubuntu).

Anyway, I decided to release a new version because I am, little by little, cleaning up compilation warnings in various old DDXs, and the xf86-video-i740 DDX was in better shape than the other DDXs as far as compilation warnings are concerned.

I may work on fixing the standby resume issue eventually, but I do not know when that will be.