Received ADATA Ultimate SU800 SSD replacements back

On Thursday, I received the ADATA Ultimate SU800 SSD replacements in the mailbox.  ADATA gave me brand new units, and neither unit has markings indicating that it was “refurbished” (hard drive manufacturers usually ship back a refurbished unit marked as such on the unit label).  Since I purchased the earlier lot of Ultimate SU800 drives in late 2016, the box and unit label designs have changed with the newer lot.  There is a little bit of cost cutting going on, such as the warranty information now being summarized on the back of the box and the plastic spacer no longer being included, but these are not that important to me.

I have not played around with the units yet, but I must say that ADATA honored the warranty without any issues, despite the fact that the “warranty void if removed” sticker on one of the units (the 256 GB model) was damaged by the previous owner.  All I can say is that I am happy with ADATA regarding their RMA handling.

Getting back warranty replacements for my two ADATA Ultimate SU800 SSDs

I discussed my troubles with the ADATA Ultimate SU800 SSDs several times last year (here, here, and here), but I finally went ahead and sent back the items.  I just got a notice from ADATA that I am getting the replacements back from them.  We will see if both of them get warranty replaced, since one of them (the 256 GB model) happened to be an open box item with the “warranty void if removed” sticker damaged by the previous purchaser (thanks, Fry’s Electronics).  They will arrive in the next 2 to 3 business days.

Finally merged the new TTM memory allocator code for OpenChrome DRM

It took more than 7 months of development, but I finally merged the new TTM memory allocator code into the upstream OpenChrome DRM repository.  Actually, there were not that many lines of code, but it took some time for me to get used to dealing with TTM, and it was pretty hard figuring out how TTM works.  I still do not really like TTM.  Also, it took me 2 months to figure out why the X Server was not booting correctly, and during those 2 months, I also spent much of my time testing and releasing X Server 1.19.7 and cleaning up compilation warnings in several DDXs.

Moving forward, I will start concentrating on the following areas of OpenChrome DRM development.

  • Convert via_* labels to openchrome_* (easy, but there is a lot to go through)
  • Implement universal plane support for the mouse cursor (see the sketch after this list)
  • Implement atomic mode setting
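
For the cursor item above, the gist is to register the hardware cursor as a DRM_PLANE_TYPE_CURSOR plane so that the legacy cursor ioctls get routed through the common plane code path.  Below is a rough sketch, not actual OpenChrome DRM code, of what that registration could look like on a recent kernel; the openchrome_* identifiers are hypothetical placeholders, and the sketch assumes atomic mode setting (the third item) is already wired up so the stock atomic helpers can be used.

```c
#include <linux/kernel.h>
#include <linux/types.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_plane.h>

/* Hypothetical cursor plane setup; only the DRM core symbols are real. */
static const uint32_t openchrome_cursor_formats[] = {
	DRM_FORMAT_ARGB8888,
};

static const struct drm_plane_funcs openchrome_cursor_plane_funcs = {
	.update_plane		= drm_atomic_helper_update_plane,
	.disable_plane		= drm_atomic_helper_disable_plane,
	.destroy		= drm_plane_cleanup,
	.reset			= drm_atomic_helper_plane_reset,
	.atomic_duplicate_state	= drm_atomic_helper_plane_duplicate_state,
	.atomic_destroy_state	= drm_atomic_helper_plane_destroy_state,
};

static int openchrome_cursor_plane_init(struct drm_device *dev,
					struct drm_plane *plane,
					uint32_t possible_crtcs)
{
	/* DRM_PLANE_TYPE_CURSOR tells the DRM core to route the legacy
	 * cursor ioctls through this plane, which is essentially what
	 * "universal plane support for the mouse cursor" means. */
	return drm_universal_plane_init(dev, plane, possible_crtcs,
					&openchrome_cursor_plane_funcs,
					openchrome_cursor_formats,
					ARRAY_SIZE(openchrome_cursor_formats),
					NULL, DRM_PLANE_TYPE_CURSOR, NULL);
}
```

A real implementation would additionally attach per-driver atomic check/update hooks via drm_plane_helper_add() that actually program the cursor registers and point them at the TTM-backed cursor buffer.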

Although I have not gone through exhaustive testing, the code stability appears to be about the same as the previous implementation.  For example, I was still able to run a dual head configuration on the HP 2133 mini-note, with the VGA output driving a 1680 x 1050 monitor alongside the built-in 1280 x 768 flat panel.  That being said, the current implementation suffers from the hardware cursor getting corrupted from time to time (the corruption goes away if you move the cursor around for several seconds).

The reason why I started implementing my own TTM memory allocator is to start adding KMS (Kernel Mode Setting) support to many forgotten, underserved PCI / AGP era graphics devices, and I plan to reuse the TTM memory allocator code for these devices.  If anyone has noticed, I have been cleaning up the compilation warnings of many of these graphics devices’ DDXs for a while.  Think of this as preparation work, although I am not necessarily committing to a particular device since I have so many of them to go through.

Anyway, this particular piece of code development held up OpenChrome DRM development for many months.  Now that I am done with it, I plan to work through the above three areas over the next several months, along with implementing another DRM driver with KMS support.

OpenChrome DRM with the new TTM memory allocator is now able to boot Xubuntu 16.04.6 reliably

It took close to 2 months to stabilize the code, but I finally figured out why OpenChrome DRM with the new TTM memory allocator was not working correctly all the time.  I did announce that I got the code working back in January here, but after further testing, it turned out the buggy version of the code was able to boot the OS only approximately 1/7 of the time.  The rest of the time, the OS would not boot (i.e., it got stuck).  Obviously, that is not an acceptable level of reliability for code that is to be pushed into the upstream repository.

Because of the difficulty I faced in getting the code working properly, I spent most of my development time in February working on other projects, like eliminating compilation warnings from underserved DDXs (here, here, here, and here) and releasing X.Org X Server 1.19.7 (here).  Figuring out how to install the compiled X Server and package the code was really difficult due to various bugs in the scripts and the lack of resources on how to do such a thing.

I came back to working on the OpenChrome DRM’s new TTM memory allocator after I got X Server 1.19.7 released.  I am still not done with the code; I have to clean it up by jettisoning unnecessary portions, but at least I have a clear path towards getting it into the upstream repository.

xorg-server 1.19.7 released

Here is the announcement.  My primary motivation for getting involved in releasing a maintenance release of X.Org X Server 1.19 was to fix the EXA 24 bpp (bits per pixel) crash bug that affects older graphics devices that do not support 32 bpp rendering.  Not too many such older devices got EXA support, but the SiS DDX happened to get it before its main developer called it quits.  For all practical purposes, the fix particularly benefits the SiS 6326, since SiS sold quite a few million of them back then (20 years ago); they were often sold as $30 to $40 low end graphics cards at computer dealers to those who did not care too much about brand or performance.  In practice, the SiS 5597 / 5598 and even older devices benefit from the fix as well.
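
To give a rough idea of why 24 bpp needs its own code path (this is an illustration only, not the actual X Server patch): at 32 bpp every pixel is a naturally aligned 4-byte word, while at 24 bpp a pixel is 3 packed bytes, so offset calculations and stores written with 32 bpp assumptions can touch memory past the intended pixel.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustration only -- not the actual EXA fix. */
static inline size_t pixel_offset(size_t x, size_t y,
				  size_t stride_bytes, unsigned int bpp)
{
	/* 4 bytes per pixel at 32 bpp, but only 3 at 24 bpp. */
	return y * stride_bytes + x * (bpp / 8);
}

static inline void write_pixel_24bpp(uint8_t *fb, size_t off, uint32_t xrgb)
{
	/* Byte-wise stores: a single 32-bit store here would clobber the
	 * first byte of the neighboring pixel, or run past the end of the
	 * framebuffer on the very last pixel. */
	fb[off + 0] = xrgb & 0xff;		/* blue  */
	fb[off + 1] = (xrgb >> 8) & 0xff;	/* green */
	fb[off + 2] = (xrgb >> 16) & 0xff;	/* red   */
}
```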

I proposed doing this back in early January 2018, but I did not really start working on the matter until late January.  That being said, it took a lot more time and personal effort to figure out how to handle the whole process once I started working on it.  Compared to releasing a DDX, it is far harder to build the X Server, and the difficulty lies in figuring out what to do when one encounters errors.  The thing is, the release build script (xorg/util/modular) has several issues, and one encounters them when building the X Server, not a DDX.  Using a version of the release build script from around 2016 to 2017 rather than the latest script was one trick that helped me in the process.

There are two parts to this process: figuring out how to install my own compiled X Server without wrecking my OS installation, and building the X Server code archive correctly.  To be frank, I never really reached a point where I could build the X Server and get it working without some “hacks.”  I was indeed able to run the compiled X Server, but I had to rely on certain keyboard related components prepared by Canonical (i.e., I had to manually copy keyboard related files).  Without the keyboard related components properly installed, the X Server will not run at all.  If you are trying to compile your own X Server, please keep this in mind.

While there did not appear to be much interest in releasing another maintenance release of X Server 1.19, I wanted to do this for those who plan to stick with X Server 1.19.  One bad habit of many FOSS (Free and Open Source Software) developers is that they often move on to developing the next version without fixing the existing code they released some time ago.  I tend to stick with something that works right now rather than chasing the newest code, and I am sure I am not the only one who thinks this way.

Regarding X Server 1.19, I can possibly do a few more releases if other people have small fixes for the existing code.  As long as there is no ABI break, I think it is okay to enhance the code.  As for the EXA 24 bpp fix, I plan to apply it to X Server 1.18 as well, and the fix itself is applicable all the way back to X Server 1.7.