It is fair to say that Novell struggled with moving from the IPX protocol to TCP/IP. Of course a big part of the problem was that IPX worked extremely well on LANs and IP brought absolutely no advantages for basic file sharing, only additional complexity. Specifically in DOS environments, a major disadvantage of TCP/IP was that it is far more complex to implement and therefore consumes significantly more memory.
But in many corporate and government networks, there was a strong push towards TCP/IP from the early 1990s, greatly accelerating in the mid-1990s when the Internet started becoming popular and very soon, indispensable. TCP/IP support became a requirement which Novell (or Microsoft for that matter) could not stop. And once TCP/IP had a foot in the door, there was understandable pressure to get rid of other protocols.
Novell’s first serious attempt at a solution was NetWare/IP (1993), or NWIP for short. NWIP was an add-on product for NetWare 3.x and later came bundled with NetWare 4.x. The trouble with NWIP was that it was relatively difficult to set up and manage, and heavily relied on DNS.
With NetWare 5.0 (1998), Novell implemented a different solution, often called Pure IP. The design of Pure IP was closer to IPX and used SLP (Service Location Protocol) to let clients automatically find the nearest server, just like classic NetWare did. Clients still needed some way to configure their IP address but by then, DHCP was widespread and unlike NetWare/IP, Pure IP did not need any special DHCP options.
When Novell ported their networking services to Linux in OES, Pure IP was the only option. While “proper” NetWare offered IPX support until the end, OES never did and Pure IP was the only game in town.
Note that Linux did support IPX in the past, and there were IPX-based NetWare clients and even servers.
For migrating existing IPX and NetWare/IP networks to Pure IP, Novell offered IPX Compatibility Mode Driver (CMD) which acted as a bridge between IPX and Pure IP networks. Of course CMD required NetWare and did not run on Linux-based OES.
DOS and Pure IP
In the days of NetWare 5.x (initially released in 1998), DOS (and Windows 3.1) was still a supported client. Novell did more or less support Pure IP in DOS, but only with the final NLM-based Client32. Older VLM-based clients cannot support Pure IP at all (but can work with NetWare/IP).
Since Pure IP was new and DOS was at the tail end of support, Pure IP in DOS is a little iffy (whereas IPX works in Client32 without issues). Notably, TRANNTA.NLM must be version 1.12 (May 1999); the 1.11 version (October 1998) from the NetWare 5 client for some reason cannot seem to find Pure IP servers.
With the slightly updated DOS Client32, I was able to connect to a NetWare 5.0 (1998) server using Pure IP. However, I still had no luck connecting to an OES2 server (2005). The servers should be compatible but something was going wrong. I have a suspicion that the issue was with the SLP implementation: while NetWare uses Novell’s own, OES2 uses OpenSLP, which is part of SUSE Linux and behaves slightly differently.
But then I kept poking around and found out that although DOS Client32 support officially ended in 2002, the core of the client lived on for much longer in the form of the SRVINST2.EXE package.
This was a bootable DOS disk suitable for installing NetWare servers over the network. It was supported until the end of NetWare 6.5. A support article regarding SRVINST2.EXE was created as late as 2006. To my surprise, the disk created by SRVINST2.EXE had no trouble connecting to my OES2 server.
It appears that the trick is that it’s using a newer CLIENT32.NLM than the one shipped with the DOS client. Note that the client must also have SRVLOC.NLM loaded in order to find the server.
Conveniently, OES2 still comes with the DOS-based LOGIN.EXE and MAP.EXE utilities. Therefore the DOS client can automatically attach drive F: to the server’s SYS:\LOGIN directory and from there, login and map the server’s usual PUBLIC directory.
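Just as an illustration, a session from the DOS prompt might look something like this (the server name OESBOX and the ADMIN user are placeholders, and the exact login name will depend on the eDirectory context):

F:
LOGIN OESBOX/ADMIN
MAP G:=OESBOX/SYS:PUBLIC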
For reference, the modules loaded after the NIC driver (with FRAME=ETHERNET_II of course), with version numbers of known working components included, are:
- TCPIP.NLM (v1.01 981026)
- TRANNTA.NLM (1.12 990511)
- SRVLOC.NLM (1.17 980723)
- CLIENT32.NLM (3.03 001018)
The relevant NET.CFG section is:
Protocol TCPIP
    IF_CONFIGURATION DHCP LAN_NET
    PATH TCP_CFG C:\NOVELL\CLIENT32\TCP
    BIND MY_NIC_DRIVER
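Purely for illustration, a STARTNET.BAT reflecting the above might look roughly like this (the install path is the default one, E100B.LAN stands in for whatever NIC driver is actually used, and the CMSM/ETHERTSM support modules are from memory rather than from a verified install):

C:\NOVELL\CLIENT32\NIOS.EXE
LOAD C:\NOVELL\CLIENT32\LSLC32.NLM
LOAD C:\NOVELL\CLIENT32\CMSM.NLM
LOAD C:\NOVELL\CLIENT32\ETHERTSM.NLM
LOAD C:\NOVELL\CLIENT32\E100B.LAN FRAME=ETHERNET_II
LOAD C:\NOVELL\CLIENT32\TCPIP.NLM
LOAD C:\NOVELL\CLIENT32\TRANNTA.NLM
LOAD C:\NOVELL\CLIENT32\SRVLOC.NLM
LOAD C:\NOVELL\CLIENT32\CLIENT32.NLM

Presumably the transport and SLP modules need to be in place before CLIENT32.NLM loads, mirroring the order of the module list above.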
The network must provide a DHCP server, but any old DHCP will do. Unlike NetWare/IP, no special DHCP options are needed, nor are additional infrastructure servers, although of course OES must run the SLP service, or be registered with an SLP server.
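If the client can’t find the server, the SLP side of things can be sanity-checked on the Linux box with OpenSLP’s slptool (if installed). Something along these lines should list the advertised service types and the directory servers; the ndap.novell and bindery.novell service type names are from memory, so treat them as a starting point rather than gospel:

slptool findsrvtypes
slptool findsrvs service:ndap.novell
slptool findsrvs service:bindery.novell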
And after going through all this trouble… is it any good? Well, maybe. It does appear to work, but of course Client32 requires a relatively modern machine to be useful. It consumes very little conventional memory, but requires at least a 386 system—it is called Client32 for a reason and won’t work on a 286 or earlier.
Note that the DOS-based Client32 shares its core with the Win9x-based Client32. For example the Win9x client comes with a handy SLPINFO.BAT utility that can be used to diagnose problems with SLP. The DOS client does not come with SLPINFO.BAT, but SLPINFO.BAT from the Win9x client works just fine in DOS.
For classic DOS networking, IPX can’t be beat–it runs on an 8088, it runs on very old DOS versions, and it uses only a small amount of memory. But running a relatively modern OES Linux has its advantages, and being able to connect to it from DOS may be handy.
NetBIOS Over TCP/IP
It is instructive to compare NetWare with its biggest competitor, Microsoft and IBM SMB networking based on NetBIOS. For reasons that may be an accident of history, NetBIOS over TCP/IP (NBTCP) was standardized very early, with RFC 1001/1002 being published in March 1987. That was before LAN Manager 1.0, before OS/2, before IBM even considered it worthwhile to support Ethernet.
Thanks to the layered implementation, it was possible to implement NetBIOS over TCP/IP as an add-on for both DOS and OS/2 clients. By 1992, Microsoft LAN Manager servers and clients came with bundled NetBIOS over TCP/IP support. IBM sold TCP/IP separately but NetBIOS over TCP/IP kits were also available. Windows NT (1993) came with NBTCP support built in from the very beginning, and so did Windows 95.
To put it in context, NetBIOS over TCP/IP is old enough that Microsoft and IBM shipped their own implementations for OS/2 1.x, whereas NetWare/IP appeared after NetWare 4.0. Pure IP appeared at about the same time as Windows 98—years after Windows 95 and NT 4.0.
Because NetBIOS over TCP/IP was defined by RFCs, third parties (for example Excelan or HP) offered support even earlier–obviously the RFCs were not published in a vacuum and several implementations existed in the late 1980s.
All this meant that by the time the Internet became important in the mid-1990s, NetBIOS over TCP/IP was already well established, with good support from Microsoft, IBM, and others. Although NBTCP could use relatively complex infrastructure, it didn’t have to, and switching existing servers and clients from the NetBIOS Frames (NBF) protocol to NBTCP was generally not complicated.
By the time Novell came up with Pure IP, NetBIOS over TCP/IP had been around for a decade. That made switching to TCP/IP vastly easier in the Microsoft/IBM networking world. For example, Microsoft’s 1995 DOS network client continued working with future Windows servers for many years to come, whereas Novell’s 1995 clients have no idea about Pure IP whatsoever.



Interesting that NetWare had the same service discovery “struggle” that Apple did with AFP-over-TCP. The original AppleShare IP clients, like Novell, relied on SLP for service discovery if the network wasn’t running “dual stack”. Thing is, SLP discovery didn’t work with Chooser in the same “it just works” fashion that AppleTalk NBP (or NetBIOS) did. Which is very “un-Apple”.
Most companies stuck with running AppleTalk side-by-side with TCP/IP for this reason (much to sysadmins’ distaste)… that and all the old LaserWriters that only supported the older protocol. It’s too bad that mDNS took so long to become available, as it finally solved all these issues.
It’s like in the rush to switch to TCP/IP, nobody thought too hard about how to replicate the incredibly useful ability of older LAN protocols to function with zero configuration and automatic discovery. As if that didn’t count for anything. There were so many efforts to implement something like it on top of TCP/IP that it’s not even funny. The mDNS RFC is from 2013 — only about 20 years too late.
I think it’s fair to say that in a small office or home LAN environment, forcing everything onto TCP/IP solved zero problems but added quite a few.
The thing is, by 1994 or so the Internet was really happening, and making TCP/IP the primary protocol and ensuring that network clients were running an IP stack simplified deploying FTP, terminal emulators, and web browsers.
Trying to get home users to configure things like Trumpet, or pick your favorite DOS packet driver suite, was a huge pain. It was not much better for small remote offices where the enterprise did not have local IT people; often, as a practical matter, it was just cross-shipping PCs to get the software configured. If by small office you mean small business, you are probably right, but enterprises that had a lot of small offices had plenty of incentive to move to IP.
Whatever value was lost in easy peer-to-peer file sharing and printer discovery was more than gained in knowing you could send out a disk with the latest Lotus Notes and tell people they could just click the install and start it up. Same thing for getting people connected to all the legacy host systems. All those 3270 terminals in the shared office suite could go away and get replaced with IRMA or other popular terminal emulators on every desk, but again, you needed all the clients on IP for that to work.
Yes, what I had in mind was a small company with ~20 employees and a single office. They still wanted the Internet but beyond that, TCP/IP didn’t really solve anything.
What I remember is that Novell implemented Directory Services in 4.0 with NDS, I want to say in 1994 or 1995; anyway, before Microsoft introduced AD in 2000. Which kind of explains its dependency on DNS, being possibly the most efficient way to navigate LDAP structures.
I thought Novell was still leading the market until about 1999, but the Internet bubble didn’t help. I do remember encapsulating IP packets in an IPX header back in the 3.X days because very few companies had internal IP networks, and IPX, while chatty, was routable. They did implement a real IP stack in Netware 5.1 but it was too late by then, plus they already ported NetWare over to Linux.
Lots of large companies used IPX/SPX for multiple locations; like I said, it was chatty, but in the early and mid 1990s no one was using the Internet as much. If you needed something, you just went to the company’s bulletin board site if you had the number, and later their FTP site.
I don’t think it was the Internet that hurt Novell, but more so databases and their need for a platform-agnostic protocol; NetBIOS sure wasn’t it. I think Microsoft was a better fit for TCP/IP because it didn’t have a routable protocol of its own.
But hey, don’t try to take anything away from my one true ClientServer NOS unlike those other Peer-2-Peer guys. lol
As far as OES and SuSE Linux go, they’re still around and doing fine now under the umbrella of OpenText, and yes, NSS volumes still exist.
NDS came with NetWare 4.0 in 1993, but I don’t think NW 4.x was really used much before 1994-1995. Years before AD in any case.
NetWare actually had a TCP/IP stack built in since version 3.11 (1991), but it was initially only used for add-ons like SNMP and NFS (both NetWare running as an NFS server and NetWare acting as an NFS gateway). Then came NetWare/IP, IP tunneling, and a couple of other tries. Pure IP only showed up after Microsoft had already won.
You’re right that IPX was routable, and networking gear of the day could deal with it, plus Novell had their own MPR aka Multi-Protocol Router. Microsoft was in a better position since NetBIOS over TCP/IP existed before the Internet became a thing, so for them going all TCP/IP was easy, and built in since NT 3.1 and Windows 95. Whereas Novell initially tried to charge a lot of money for NetWare/IP and such.
IIRC, Novell had some interesting articles regarding TCP/IP back in their magazines. Rather negative, obviously, but probably information that would be useful to explain why Novell made the choices it made.
I think Novell made two major mistakes during the early 90s. First, they overpaid for both WordPerfect and Digital Research in order to compete with MS in fading markets. Second, Netware 4 was pushed out before it was ready which encouraged potential purchasers to give another look at NT.
I remember when Novell was trying to push “TCP/IPX” to avoid dual stack with their IntranetWare gateway. The page is still up: https://support.novell.com/techcenter/articles/ana19960902.html
The client software was terrible. It hooked into Winsock and launched every time something opened a socket (transparent IP encapsulation it was not). For some reason my high school did a greenfield install of this in 1997…. it didn’t last very long and was replaced with a dual stack setup and a SOCKS proxy a year later.
TCP was really not meant for LANs. You get nothing for all the extra complexity. It’s not a coincidence that for example NFS ran over UDP for a very long time.
Yes, the acquisitions were clearly a terrible idea — WordPerfect was acquired in 1994 and sold to Corel in 1996; it clearly was not working at all.
I did my first Novell 86 network in 1986. 3 nodes and a non-dedicated HP Vectra (286) file server that ran MS Word for DOS on ArcNet. I remember going to a class about protocols around 1992 or so. TCP/IP was mentioned as a government protocol that really wasn’t used in business and probably would go away.
Is it too late to get my money back?
I attended a TCP/IP class given by IBM in the early 1990s. It was described as a rather otherworldly thing, obviously not as entrenched as SNA and NetWare but worth learning about. In retrospect, I couldn’t have really picked a better class to attend.
Netware’s implementation on Linux always felt incredibly lazy. They could have fully supported IPX… they just chose not to. They should have recognised that part of their niche was people with legacy networks who didn’t want to go out and update clients; they just wanted to leave things working but upgrade their server to something a bit more modern.
If you look at late 1980s trade press, it was always OSI this, OSI that. TCP/IP was considered a placeholder until the mythical OSI networking standard arrives. Eventually people realized that TCP/IP more or less does what they need, and it’s already there, so why not use it.
Yeah, on a simple LAN IPX can’t be beat. I wonder if part of the problem was that Novell didn’t fully appreciate that if they’re forcing users to switch to a new client, they’ll consider other solutions as well.
That said, I can fully understand why Novell wanted to switch to Linux instead of maintaining their own server OS.
Wasn’t the Netware OS legendary for its stability and uptime? That being said, NetWare supported speaking AppleTalk to Macintosh clients with the optional “NetWare for Macintosh” package. It even exposed file and print shares using AppleTalk native protocols. Why they couldn’t do this earlier with TCP/IP is beyond me.
Did x86-64 Linux ever support IPX?
A completely off topic comment, but:
I just want to let you know that MS SQL Server 1.x for OS/2 was recently added at Winworldpc if you or anyone else want to take a look at it.
(I would think that MS SQL Server and LAN Manager were the two major server applications that ran on OS/2).
(Now I’ve actually read the blog post and all comments)
Interesting topic!
Did anyone other than IBM/Microsoft and the open source Samba people actually implement NetBIOS over TCP/IP?
I.e., was anyone writing their own code?
AFAIK the HP offering was their adaptation of Lanman/X, and Lanman/X was also the basis for Pathworks (running both on Ultrix and VMS).
Or did Lanman/X perhaps not include Netbios over TCP/IP but just the SMB server part, and each vendor did their own Netbios implementation?
Re RFC 1001/1002: It seems like one of the things Microsoft were great at was keeping an eye on what happened outside their customer sphere. I think this was probably a major reason for RFC 1001/1002 happening. Worth remembering is that Microsoft sort of came from a minicomputer background, with their BASIC interpreters written in assembler that they assembled on a PDP (or at least that was the case for their 6502 BASIC). Thus they had been exposed to minicomputers and would most likely have understood that sooner or later great features from minicomputers would come to microcomputers, like multitasking, memory protection, virtual memory, the same computer being able to both be a server and something the end user uses directly, and so on. Combine this with their ventures in Xenix and I bet they knew that there was a high enough likelihood that TCP/IP would catch on that it was worth creating those RFCs. I don’t know what happened at IBM at this time, but they likely had an eye on TCP/IP too.
Re Novell buying up various companies and whatnot in the ’90s:
I would think that once Windows for Workgroups came out, it became obvious that the future would be operating systems that didn’t need any major additional software to act as a client to file and print shares, and would be able to act as a server for smaller companies, and also that computers would rarely run DOS without Windows or possibly something similar. Thus the market for commercial third party peer-to-peer networks, and also network software that required a special client with its own network stack, would be dead in a few years, and they probably just threw money at various things like WordPerfect to test out what they could do.
Also, Windows NT would have shown that the future for operating systems where there was a major difference between a server for a large group of users vs. what you ran on each client would also be dead. And since there was no end user / desktop software that could run on Novell’s NetWare server, it was a dead end, unless they had gone through a huge effort to implement some API making it able to run other operating systems’ software, like Win-OS/2, and that was most likely not worth the effort.
It’s somewhat surprising that Novell didn’t port their directory / user rights management things over to Windows NT and somehow make it replace Windows’ built-in user rights handling, selling it as a way to switch over to Microsoft’s OS and protocols while still keeping their customers in a vendor lock-in with their user databases in Novell’s software. Sure, eventually Active Directory would have killed that too, but I bet that if it was just an add-on you’d add to each NT domain server, they might have been able to sell it for many more years. And since Microsoft would have earned money from selling the actual OS anyway, they wouldn’t have been competitors as much as they actually were, and thus Microsoft might even have encouraged this.
It seems like Microsoft really understood that a prime target would be to keep their customers having their user databases in Microsoft products. In particular if we look at Services for Unix AFAIK it came with NFS software but it was intended to use NT as the user database, not whatever NFS usually uses in the Unix world.
Re Appletalk/appleshare:
NT Server also came with an Appleshare server, and all NT versions came with the Appletalk protocol. (Not sure what the intent was with shipping the protocol but not the file sharing with NT workstation, perhaps as a way for users to write and test custom Appletalk software on their workstation rather than needing to test on a server?). From what I’ve read (anecdotal evidence a few users here and there at various places online) the server worked great.
Re what protocols are suitable for local networks:
Were Netbeui/NBF really any worse than any other protocol for a place with say 5-20 computers?
But also: Every time I have a look at anything related to networking in DOS, I get a strong feeling that at least Microsoft were somewhat specification/committee driven, I.E. they decided to have certain APIs and whatnot and then they implemented it and it ended up taking up lots of memory. Compare for example with the modern vintage computing oriented mTCP.
If they had gone with SMB directly over TCP rather than SMB over NetBIOS over TCP, they could likely have saved a bunch of memory, and the few applications that needed the NetBIOS API (maybe clients for their SQL server???) could have had an additional shim that wouldn’t be needed for normal file and printer sharing.
If I had an infinite amount of time to spend on vintage computing things, I would probably put “writing a SMB client that runs directly on TCP rather than Netbios over TCP” to the to-do list. It would probably be larger than the smallest possible network clients, but also most likely way smaller than when using TCP/IP with the Microsoft DOS clients.
Re who needed TCP/IP in the 1990’s:
In addition to actually connecting to the Internet, it seems like Apple/MacOS was more or less the only other major computing platform that could somewhat easily network with PCs using anything other than TCP/IP.
DEC Pathworks implemented NetBEUI / NBF so there was a way to communicate with DEC stuff, and DEC also made a DECnet implementation for PCs. And then there was of course the IBM mainframe stuff. But there seems to have been more or less no Unix-related non-TCP/IP stuff, and very little for other minicomputers or non-PC microcomputers that would work without TCP/IP.
Not that it would have been super common to mix various systems, but it would still have been a serious use case.
Btw, re what Linux supported: it not only had IPX in the kernel up until recently (or maybe it’s still there?), but it also had SPX support up to, IIRC, the end of the 2.4.x kernel series, i.e. into the mid-00s.
Maybe Novell didn’t want to rely on IPX/SPX implementations that they hadn’t written, and/or anyways wanted to move away from IPX/SPX?
==============
A general PC/microcomputer LAN/network question:
What network applications were in use in addition to file and printer sharing, and databases?
I.E. unless you had a minicomputer, workstation (or a mainframe) or whatnot, what else ran over networks?
I think you’re grossly overestimating Microsoft’s influence on 1980s networking. NetBIOS was an IBM thing. SMB was an IBM thing.
If you look at RFC 1001/1002, there is no mention of Microsoft and no obvious hint that Microsoft was involved at all. The author of the RFC was pretty clearly Excelan (later acquired by none other than Novell).
Putting TCP/IP onto a 1980s DOS machine was only done in desperation because TCP/IP was complex, memory hungry, and far more involved than existing LAN protocols. Someone had to assign IP addresses. All clients had to be correctly configured with the right netmask and routing information, as well as name resolution information. DHCP didn’t exist before 1994. Compare with IPX or NBF where client configuration ranged from assigning a unique name (NetBIOS) to nothing whatsoever (IPX).
LAN Manager/X and XENIX may have had something to do with RFC 1001/1002. For a while, Intel pushed OpenNET which was SMB networking.
The TCP/IP stack that Microsoft shipped with their DOS LAN Manager clients pretty clearly came from HP. Excelan had their own DOS NBTCP implementation, and I believe FTP Software and others did too. IBM’s early NetBIOS over TCP/IP kits for OS/2 seem to have been written by Dan Lanciani, presumably under a contract.
By 1990, NetBIOS over TCP/IP was clearly a thing but I don’t know who the primary customers were (USG?). Microsoft was also clearly not the driving force — for example, LAN Manager 2.0 (1990) did not come with NBTCP support, only LM 2.1 did.
With IBM it’s even harder to tell, because NetBIOS over TCP/IP for OS/2 was a separately orderable kit available since circa 1990, but it wasn’t bundled with the OS until Warp Connect in 1995. IBM also had their own NetBIOS over TCP/IP kit for DOS. I’m not sure of the exact timeline but sometime around Warp Connect IBM also rewrote the NBTCP implementation such that it lived in a kernel module (rather than a userland process) which significantly improved performance, and strongly hints that in the mid-1990s NBTCP became actually widely used and was more than just a checkbox item.
My experience with NBF is that it works perfectly fine on a LAN. I find Novell’s clients easier to set up and use, but that’s not really a function of the underlying protocol.
Re NetWare — the idea of a dedicated server was sound, and people weren’t really going to run random applications on it. NetWare was tuned for high I/O bandwidth and IMO had some advantages over a general purpose OS, but I suspect keeping up with new hardware developments became too much of a burden over time. I can’t say how much of a difference it made (or not) that OES could run all standard Linux server side stuff.
I remember the mars_nwe project. As a university student I was doing IT support for a department which included Windows 3.11, Netware, Solaris, OS/2, and very early Linux machines. I wasn’t allowed to play with the Netware stuff for obvious reasons but I thought I’d give the mars_nwe thing a try on my linux desktop. I configured it and mapped a drive from a Windows machine and went to class.
When I came back there was a note from campus networking saying “do not plug in network. Call x-xxxx”. Apparently the emulator folks didn’t implement the “find nearest server” bits correctly, and every client on campus (and the campus 50 miles away!) thought my desktop was the closest and tried to use it for login (which would fail), locking everyone out of their Novell files.
Luckily I wasn’t fired but I was more careful when trying out experimental software.
Re: Chris M. — why couldn’t Novell continue with their legendary super-stable OS? The answer is simple: SMP.
Novell Netware was legendary for stability, etc. — but only as long as you run well-behaved and well-tested NLMs on it.
Add Apache, PHP, MySQL (yes, they existed!) — and stability goes to hell. It’s simply impossible to provide a stable environment without proper memory protection and OS design.
The only OS to survive the transition from the UP world to the SMP world was Unix — because people started experimenting with it decades before it was practical to sell it as a commercial product.
All others… recall the fate of Symbian, Windows CE and others… the window of opportunity where a non-SMP OS may successfully compete with an SMP-capable OS is around 2-3 years, and development of an SMP-capable OS (be it a new development or adoption of an existing design) takes around 5-10 years… the dates simply don’t match unless you start well, well, WELL in advance… like Microsoft did (with Windows NT, but, notably, not with Windows Mobile), like QNX did, like a few others did — but not Novell, who spent their time doing crazy things with DR-DOS, WordPerfect, etc.
NetWare had functioning SMP support since circa 1995, more or less the same as everyone else, long before SMP hardware went mainstream. I can’t say how much SMP was a factor.
I agree that NetWare did not transition well to the AMP (Apache, MySQL, PHP) world. It was designed for a relatively small, trusted core of code to run on the server, not for a giant pile of ever-changing and sometimes questionable code. I assume someone at Novell did the math and concluded that yes, NetWare could be made much more like Linux (with a POSIX “userland”)… but at that point, why not just use actual Linux? From that perspective, I think porting key Novell technologies (NSS, eDirectory) to Linux was a very sensible move.
When talking about crazy things, don’t forget UnixWare. Which I think may have had SMP support a touch earlier than NetWare.
I started working at the time Netware began to be replaced by alternatives, mostly NT/2K. Some customers’ servers were still on v3/Bindery, many on v4/NDS. I remember very few migrations to v5; my “awe” at seeing the Java desktop running on the server instead of the monitor.nlm cpu-usage snake that I was accustomed to wasn’t shared by the customers buying it. But at the same time, what the server desktop allowed was slow and inconvenient; most if not all of the management was done on the clients as before. I believe Novell was trying to fill the gaps the competition (Microsoft mostly) had, a server with a GUI and IP, but those felt bolted-on, complicated and almost useless. I still remember always having to keep IPX running even on networks that could’ve been IP-only, it “just worked” for Netware.

Regarding the SMP support, I’ll have to re-read the manuals, but from memory, Netware used to support SOME SMP; initially it could offload some IO tasks on weird mobos with 486+386 CPUs (I still remember the papers about how the elevator algorithm was way better). But the problem is that most if not all NLMs were built for the “proper” Netware environment, without many of the modern OS process protection and isolation techniques that make an OS more resilient but also slower for the user. I remember testing pure Netware vs NT file transfer speed on the same hardware and Netware was always ahead, but that was because all of its code was running with almost no protection on the server, and if an NLM misbehaved the whole server would ABEND (ABnormal END). To the credit of the NLM coders, it was really really difficult to see abends; it used to be rock solid as a file server running with really high uptimes, but when CPU/IO speed became less of a bottleneck, all the OS features/protections that Netware lacked became more noticeable.

It was probably doable to bolt on POSIX for *nix/AMP customers, but those were too separated from what Netware was meant to do; web developers used to know nothing about Netware, and Microsoft was growing at its fastest, pushing IIS at the time. It was sad to see Netware die, but it was probably inevitable; it was too much “D.O.S.” in a way.
I think that the slow speeds of the Netware Java based tools put me off Java for a decade. The security issues kept me away.
I couldn’t understand why Novell thought that those tools were preferable to the C-Worthy interface tools they had.
One of the things that killed Netware with the small insurance companies was the dropping of ERP support. They needed to support NT and they couldn’t afford to do both, so they pushed their clients to NT Server. That was about 2001, when I decided to refocus my career in a different direction. My job was quickly being reduced to fixing laser printers, even with my NT 4 MCSE.
Java was what everyone was talking about. Java applets were reasonably fast. No one really understood how slow Java got with an application that didn’t readily fit in memory. Another case of committing the entire business to a direction before having a working prototype.
Actually I think the problem was exactly that there was a working prototype. But a prototype is not the same thing as a finished product, and often people are unable or unwilling to conceive just how far from a finished product it is.
OS/2 had some notoriously horrible Java based software in 4.5 such as the GUI LVM manager. Took lots of RAM, took a while to start, GUI was clunky and awkward and not quite CUA compliant.
That was a really weird thing to do. It made no real technical sense and caused odd problems like “well, you can install Java 1.3, but if you want to run the LVM GUI, you also need 1.1 installed”. Quite likely it was an entirely political decision to do it.
The TCP/IP config interface was also redone in Java, all the stranger because a native OS/2 implementation already existed.
Side track rant:
In theory the idea of using Java was great. You could in theory run things on many different platforms.
But IMHO it’s incredible how Sun (and later Oracle, if it happened after Oracle had bought Sun?) messed things up by not having all APIs fully backwards compatible across versions.
The result seems to be that we vintage enthusiasts need (physical or virtual) machines with different combinations of Java versions, and possibly web browsers and operating systems, to be able to run all the Java based web interfaces and whatnot.
If it only affected vintage computing enthusiasts that would be one thing. We had exactly the same kind of trouble at work with Java based LOMs (Lights-Out-Management) on circa 2010 servers. Write once, run nowhere is indeed not a great selling point.
From what I recall, the biggest problem was not APIs per se but security. Old implementations were declared insecure, and getting newer Java client versions to talk to older servers was somewhere between quite involved and impossible; the simplest solution was a VM with an old enough Java + browser.
To be fair, communication which does not exist is indeed 100% secure and unbreakable.