Open Source

Linux needs to play its own game and quit comparing itself to Windows by D Colbert

I'm fairly frequently accused of being a Microsoft fanboy or corporate stooge for what many Linux fans seem to mistake as an "anti-Linux" philosophy and outlook. In my own defense, I don't think I am "anti-Linux". I think the main problem is that the Linux community pictures itself as a looming, direct threat to Microsoft "dominance".

Frankly, Linux spends too much time directly trying to compare itself to Microsoft in areas where Microsoft dominance is virtually insurmountable, and where Linux, despite significant advances, finds itself (relatively) lacking. If I were a politician, I wouldn't campaign by claiming:

"My opponent claims that I am inferior to him in foreign policy, and while this may be true, I have made SIGNIFICANT advances in my foreign policy skills. Certainly my opponent is the BETTER choice for foreign policy, but hey, I'm a NICER guy, and even if I'm not as good, I'm much better than I used to be".

Unfortunately, that is exactly what Linux does when it tries to compete directly with Windows on the desktop OS platform in areas like Plug-and-Play support for the widest variety of consumer accessories. I think it stings the Linux community to hear things like this put so bluntly.

I also think the Linux community has a natural desire to avoid facing these facts, and that as a community, it would often rather embrace a fantasy world where Ubuntu is really a compelling alternative to a Windows OS that could threaten to disrupt Microsoft dominance. Speaking the truth in the Church of Linux is an excellent way to get excommunicated. But then again, people have been running into trouble by speaking the truth in religious organizations for centuries.

And therein lies another problem. I don't think the Linux community lies, but I don't think it makes accurate or fair comparisons. Whenever I hear figures on the number of data centers with huge Linux deployments, I always wonder where all of these data centers are.

With 15 years of experience, I've seen Sun Solaris heavy data centers, I've seen HP-UX heavy data centers, and I've seen LOTS of big corporations with huge Windows deployments. But I've seen very little Linux, scattered here and there, and often in strange, supporting network and infrastructure roles.

Another (ironic) example I use is that when you see a giant LCD on the Vegas strip that has crashed, it isn't Linux you see underneath it. When your cable TV guide has crashed, it isn't Linux. When you're in the Build-A-Bear Workshop and a PC is down, it isn't GNOME that the app crashes back to. And that isn't about Linux versus Windows reliability - it's about applications, and we all know Linux apps crash as frequently (or more, in the case of KDE) as Windows apps.

Some people claim that I have confirmation bias - that I see more Windows because I work in a Windows-biased segment of the industry. But I think it is clear that more than 9 out of 10 PCs you run into in the private and public sector, outside of very narrow niche industries, are going to be Windows-based. The number of Linux machines becomes grossly inflated through several methods, many of which do not compete with Microsoft technologies and actually assist Microsoft while keeping Microsoft's real competitors tied up worrying about additional competition.

For example, I've got a feeling that the Linux numbers we see count every embedded Linux device - consumer or otherwise - available on the market. I think the Linux community takes these artificially inflated numbers and compares them to Microsoft's Windows desktop and data center numbers. I think that Linux numbers we see count every download by every curious user who may or may not ever get around to installing their download on an actual physical machine.

Most importantly, once we get down to the nuts and bolts of it, I think Microsoft faces very little competition on the infrastructure side of the enterprise, such as DNS, Active Directory, DHCP, and Web/IIS. You might also count FTP and e-mail.

It occurs to me that Microsoft really doesn't care when Linux "grows" enterprise market share, because it is the OS under the interface of the new IBM XIV storage solution. Effectively, Linux is an appliance in this role, and the Linux is not as important as the role of the appliance.

Often, Linux is so transparent that, as a data-center administrator, you may not even know that Linux is in your data center. There is no conceivable way this is a threat to Microsoft. It isn't even really fair to use those numbers when comparing Windows installs to Linux installs. I said to a friend, regarding the TomTom lawsuit,

"Linux is very popular, as long as no one needs to know it is there".

It's great that Linux can have this space and compete with products like Win CE/Mobile/Phone - or be in the enterprise providing supporting infrastructure roles that Microsoft doesn't want to compete with in the first place. Linux becomes a foundation on which companies transparently build other turn-key products as their core business for retail consumption.

Windows can be used in this role, but is generally used in a far different business model - as a component of productivity-enhancing business machine solutions running a variety of off-the-shelf, user-selectable applications. But when the Linux community speaks of "competing" with Windows, they're really talking about this second model - and it isn't, it absolutely isn't, the place where Linux performs the strongest. This is just plain silly. Who ignores what they do better while trying to compete on what they know they do worse? Republican candidates, in the last election, that is who.

I have Linux in my Windows environment. Cisco Call Center Manager for VoIP is a Linux-based utility. I'm in very critical discussions right now that will likely result in a migration from EMC to IBM network storage solutions - and that solution is Linux-based. In both cases, I do not need to know how to compile kernels, how to grep, how to use package managers, or how to use vi to edit config files in the /etc directory. As a matter of fact, with the IBM XIV solution, I couldn't do that if I wanted to.

These solutions do not displace a single Windows solution in my environment. They facilitate better delivery of my Windows-based solutions and improve the experience for my end users. The IBM XIV solution is called disruptive technology, and it may very well be.

Linux is the foundation of a disruptive technology that threatens a current giant in the technology marketplace. Unfortunately, it isn't Microsoft, it is EMC - and I imagine that Microsoft has no strong feelings one way or the other about that. It is certainly a huge win for Linux, and it illustrates that Linux has a viable place in the technology sector. What it does not do is show that Linux is, or ever will be, a threat to Microsoft dominance of the back office, plus corporate and end-user desktop.

Ultimately, I am not married to Microsoft technologies. I support Microsoft because that is where the significant amount of business demand is. In my own data center, I'll deploy the best, least expensive, most reliable solution in every case, regardless of what OS platform underlies that solution.

So, I am certainly not vocally anti-Linux. I'm opposed to forcing a square peg into a round hole because you think the square peg is actually the superior round peg, or because you think the round peg maker isn't a very nice businessman, or because you think that the philosophy of the square-peg community is more forward-thinking than the philosophy of the round peg community.

I don't like it when musicians and actors preach science and political philosophy to me. I don't want my business solutions to be based on making productivity sacrifices that make me less able to compete.

Ultimately, I think this is what Linux versus Windows debates come down to - are you open source and willing to sacrifice in order to support open source, or are you in business to make a profit and be the most competitive force possible in your market? Political, ideological, and philosophical differences are at issue here, because clearly, Windows solutions dominate and are superior for desktop, back office, and application-hosting solutions on the enterprise.

Why enterprise networks run Windows, not the Mac by Zack Whittaker

Have you ever considered why the Mac is so well suited to the home environment? Has it ever occurred to you why the Mac is great with multimedia, with fun stuff like graphics and movie editing, but not so great at the serious stuff? With these, have you ever pondered as to why you very rarely see a bunch of Macs in an enterprise, corporate setting?

10 best features of Ubuntu 13.10

(@ TechRepublic) Ubuntu 13.10 (aka Saucy Salamander) is about to hit the streets, but not without much controversy and drama trailing in its wake. In fact, never before has there been a distribution release so mired in upset. From the choice to develop the Ubuntu-specific Mir display server rather than adopt Wayland, to the inclusion of Smart Scopes, Ubuntu 13.10 couldn't catch a break. However, after using the release candidate for a while now, I'm here to say Ubuntu 13.10 enjoys more polish than any current Linux release. Outside of the many bug fixes and updates, I can give you ten reasons to like the latest version.


1. Smart Scopes

One of the biggest issues surrounding Unity lately is Smart Scopes. Think of this feature as an all-encompassing search for your desktop. Open the Dash, enter a search string, and you'll get results from around one hundred sources: local disks, the UbuntuOne cloud, Amazon, Wikipedia, the UbuntuOne Music Store, YouTube, social networking sites, and much, much more. Of course, the big issue with Smart Scopes is that it transmits your search terms to remote servers, which some consider a privacy issue. For those that don't like it, it can be turned off. For those that do, you'll be amazed at how powerful a search tool can be. Personally, I fall into the latter category and use Smart Scopes every day.

2. Ubuntu One install login

During the installation of Ubuntu 13.10 you are prompted for your UbuntuOne account credentials. Although this doesn't really change much in the end, what it does is streamline the overall installation process – especially for those that already have a UbuntuOne account. For those that don't, they'll be made aware of the service and (hopefully) sign up for one of the most seamless cloud storage services available.

3. Keyboard language selector

If you happen to use different keyboard layouts for different tasks, you are in for a treat. With 13.10 comes a notification button that allows you to quickly switch between those keyboard layouts with a click of the mouse. You can even set up keyboard shortcuts to switch between your different layouts – making use of multiple languages incredibly simple.

4. Compiz performance improvements

Many users of 13.04 and earlier iterations found the performance steadily improving, but the desktop still lacked a certain zip when opening the Dash and in other interactions. With Saucy, the improvement in Compiz is noticeable. The speed at which the Dash opens is definitely a step forward in bringing Unity in line with the faster desktops on the market. With these improvements, the desktop no longer feels sluggish on any front, nor does it have any of the holdover flakiness of previous releases. Some of these improvements are a combination of Unity and Compiz - but much of the performance is thanks to Compiz updates.

5. In Dash payments

If you're looking for the means to quickly purchase items from various online retailers, Ubuntu 13.10 brings you In Dash payments. Open up the Dash, search for an item, right-click the item, and click Buy. Clicking the Buy button will then launch the default web browser to that item's web page, where you can purchase the item. What is nice about this is that it allows a bit of price comparison - when, for example, a multimedia download shows up in both Amazon and the UbuntuOne Music Store. Pick the right price and purchase.

6. Kernel 3.11

There are tons of tweaks to the new kernel that focus on performance. One of the major changes is zswap, which alters the way swap space is used. According to the zswap documentation: "...zswap basically trades CPU cycles for potentially reduced swap I/O. This trade-off can also result in a significant performance improvement if reads from the compressed cache are faster than reads from a swap device." Also included in the new kernel you will find: AMD DPM support, low-latency network polling, KVM/Xen support (for 64-bit ARM), better AMD Radeon support, and much more.

7. Radeon UVD support

Out of the box, Ubuntu 13.10 should include support for Radeon UVD (Unified Video Decoder - which deals with hardware decoding of the H.264 and VC-1 video codecs). Prior to this, a number of tricks and a bit of hackery were necessary to get this working. Not so with Ubuntu 13.10. Although much of this support comes from the kernel, Saucy Salamander should go a long way toward making this much easier to deal with than previous iterations.

8. LibreOffice 4.1.2

The flagship open source office suite continues to get better and better with every release. With this release of LibreOffice, all of the new features that arrived in 4.1 finally have that polished look and feel they've desperately needed. One of the biggest improvements is the menu system. If you don't use the HUD (which you should), you will find the standard menus to respond far better than with 13.04 using LibreOffice 4.1. This improvement alone makes the upgrade worth your time (especially if you are a LibreOffice power user).

9. Easier server connection with Nautilus

One of the things I like about Nautilus is the ability to hit Ctrl+L and enter the address of an SMB share on a network. Now there is a simple icon (in the left nav) that lets you click and then enter the address of the server. Although you still have to enter "smb://" followed by the IP address, it's still more intuitive than before. I do wish this removed the need for typing "smb://" to get into the share. It would be far more user-friendly if all you had to do was click "Connect to Server" and then enter the IP address of the share. But that's picking at nits.

10. Back to Xorg

Ah, the controversy hits a bit of a bump in the road. It was thought that 13.10 would be the first release to default to XMir, the X compatibility layer for the new Mir display server. That is not the case. A few nasty issues reared their heads (multi-monitor problems in particular), so the developers decided to hold off on defaulting to XMir. Personally, I think this was the right call. 13.10 doesn't need the added weight of a new display server. I believe XMir shouldn't arrive until 14.04 - when it's ready for prime time and not before.

Ubuntu 13.10 should have fans of the distribution excited. But it's not just fans that should pay attention to what Canonical is doing with Ubuntu. Users fed up with the Microsoft platform should take note – Ubuntu Saucy Salamander is polished and easy enough for the average user to enjoy a robust and reliable desktop experience. Hopefully these ten features will pique your interest enough to get you to jump on board.

10 essential LibreOffice Writer tricks by Jack Wallen

I’m a writer of both tech articles and fiction. I depend upon LibreOffice on an hourly basis. Because of this, I have a personal relationship with the word processor piece of the office suite puzzle. Most people use only a small portion of the power of the word processor, but it doesn’t have to be that way. In fact, if average users knew the power they held at their fingertips, they’d be amazed at what they’re missing.

With that in mind, I want to illustrate some handy tips and tricks you can use with LibreOffice Writer. These tips won’t make you a better writer, but they will make the process of writing (in one form or another) easier.
1: Create instant hyperlinks

I wind up placing a lot of hyperlinks within many of the articles I write, which can be quite cumbersome. You type the text, click on the Hyperlink button, enter the URL, click Apply, and click Close. There is a much better way: Use the Hyperlink Bar. To enable this, click View | Toolbars and select Hyperlink Bar. This will open a new toolbar that has two simple text areas. The first (to the far left) is where you enter the text for the hyperlink. The second area is where you enter the URL. Once you’ve entered the URL, press Enter and the link will appear to the right of the current cursor position.
2: Use the Thesaurus

That’s right, LibreOffice comes complete with a handy Thesaurus to use as you write. To open up this this tool, highlight the word you need help with and then hit Ctrl + F7. The LibreOffice Thesaurus will open with suggestions for the word. If you are using a window manager or have created a custom shortcut that uses Ctrl + F7, you can just go to Tools | Language | Thesaurus.
3: Take advantage of autocomplete

You know that tool on your smartphone that mostly just gets in the way of your trying to type? Well, you can enable it in LibreOffice Writer, only it’s not so bad. Click Tools | Autocorrect Options. In the Word Completion tab, make sure the Enable Word Completion option is checked. In that same tab, make sure the check box for Collect Words is selected. With the latter option checked, LibreOffice will record every word you type so autocomplete will have a database of words from which to pull. Now as you type, LibreOffice will complete your words and you just have to hit the Enter key to accept its suggestion.
4: Know your keyboard shortcuts

Here’s the deal. Every application has keyboard shortcuts. Most people know the usual Ctrl + A, Ctrl + V, Ctrl + P shortcuts. But that short list does little in the grand scheme of things. There exists a huge list of preconfigured keyboard shortcuts for LibreOffice. The best way to learn these shortcuts is to click Tools | Customize and then click on the Keyboard tab. There, you can scroll through the complete list of preconfigured keyboard shortcuts. Go through that list and commit to memory those you’ll need to use most.
5: Protect your templates

LibreOffice has a great system for using templates. You can create a collection of them and house them in a shared repository. But when you create them, it’s a good idea to make them read-only and to password protect them. The last thing you want to do is work hard on a template only to have someone overwrite it, causing you to go back to the drawing board. To mark a template as read only and password protect it, open it and click on File | Properties. Click on the Security tab and select Read Only. Now click the Protect button and when the new window opens, enter (and re-enter) the desired password.
6: Create a table of contents

If you’re creating larger documents, you should seriously consider creating a table of contents. It may sound complicated, but it’s not. To add a table of contents, follow these steps:

    Click in the document where you want to create the table of contents.
    Click Insert | Indexes And Tables | Indexes And Tables.
    Click the Index/Table tab.
    Select Table of Contents in the Type box.
    Select any options you want and then click OK.

If you later make a change in the document that must be reflected in the table of contents, you must update it by clicking Tools | Update | All Indexes And Tables.
7: Navigate through your document

If you are creating a complex document, you will want to take advantage of the LibreOffice Writer Navigator. This handy tool will allow you to click on an object within your document and immediately zip to that spot. The Navigator includes objects such as headings, tables, text frames, graphics, OLE objects, bookmarks, sections, hyperlinks, references, indexes, and comments. To get to the Navigator click View | Navigator or just hit F5.
8: Perform quick calculations

When a document requires some calculations, there’s no reason to fire up a calculator or open up a spreadsheet. LibreOffice Writer has a formula toolbar that lets you perform calculations from within the word processor. The formula toolbar doesn’t permanently reside in the toolbar section of LibreOffice. Instead, you view it, run your calculation, hit Enter, and the calculation will appear at your cursor. You can do the following calculations: sum, round, percent, square root, power, various operators, and various basic and statistical functions.
9: Dock and undock your toolbars

A really cool feature is the ability to dock and undock toolbars. All toolbars can become undocked windows, which allows you to position them exactly where you want. Here’s how you do it. Find a toolbar you want to serve as an undocked window. Hold down the Ctrl key and then double-click an empty spot on the toolbar. This will undock the bar which can now be moved around like a standard window. Repeat the action to re-dock the bar.
10: Move text efficiently

Writer offers a really great way to copy a block of text to a new location within a document - and you don't have to use copy/paste. Instead, highlight the text, press and hold the Ctrl key, and then drag the text to wherever you want it. (Dragging without Ctrl moves the text rather than copying it.) I prefer this method because it is more efficient than the standard copy/paste method.

AT&T chooses Ubuntu Linux instead of Microsoft Windows


(Brian Fagioli @ BetaNews) While Linux's share of the desktop pie is still virtually nonexistent, it owns two arguably more important markets -- servers and smartphones. As PC sales decline dramatically, Android remains a runaway market-share leader in phones. In other words, fewer people are buying Windows computers -- and likely spending less time using them -- while everyone and their mother are glued to their phones. And those phones are most likely powered by the Linux kernel.

Speaking of smartphones, one of the largest cellular providers is the venerable AT&T. While it sells many Linux-powered Android devices, it is now embracing the open source kernel in a new way. You see, the company has partnered with Canonical to utilize Ubuntu for cloud, network, and enterprise applications. That's right, AT&T did not choose Microsoft's Windows when exploring options. Canonical will provide continued engineering support too.

John Zannos, Vice President of Cloud Alliances and Business Development at Canonical, explains, "This is important for Canonical. AT&T's scalable and open future network utilizes the best of Canonical innovation. AT&T selecting us to support its effort in cloud, enterprise applications and the network provides the opportunity to innovate with AT&T around the next generation of the software-centric network and cloud solutions. Ubuntu is the Operating System of the Cloud and this relationship allows us to bring our engineering expertise around Ubuntu, cloud and open source to AT&T."

"By tapping into the latest technologies and open principles, AT&T's network of the future will deliver what our customers want, when they want it. We're reinventing how we scale by becoming simpler and modular, similar to how applications have evolved in cloud data centers. Open source and OpenStack innovations represent a unique opportunity to meet these requirements and Canonical's cloud and open source expertise make them a good choice for AT&T", says Toby Ford, Assistant Vice President of Cloud Technology, Strategy and Planning at AT&T.

This is a great example of a technological mutualistic relationship. Obviously, Canonical is the big winner here, as AT&T is a huge partner -- it should inject some much-needed money into the growing company. With that said, AT&T is benefiting too -- utilizing Linux and other open source technologies is a smart, cost-effective way to retain flexibility. In other words, the company is wise to choose Ubuntu.

Active Technologies and Email Policy

There is a limit of 450 outgoing emails per hour, per domain.

If you send more than 450 emails per hour, most of the e-mails will bounce back with an "undeliverable" error. If this occurs, it will take some time before your account can send emails again. Therefore, we recommend waiting at least 1 hour after this issue occurs before sending email again.

Mailing List Rules
1. Email lists must be throttled to a rate of no more than 1 email every 8 seconds. Sending 1 every 8 seconds sends 450 emails within 1 hour, keeping you at or below the 450 outgoing email limit per domain.

If your mailing list software does not allow you to throttle, you must switch to an application or script that will. We recommend PHPList, which can be found in your CPanel, under Quickinstall.

IMPORTANT: If you do not throttle and you try sending 450 emails, the server will try to send all of the emails in a single burst. This will cause a very high load on the server, and the entire server will be sluggish, potentially affecting your sites and service, until the sending process is completed. It is our job to keep the server up and running without being sluggish or experiencing issues. Anyone who causes the server's load to spike may have their email account suspended and the sending process terminated.
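As a sketch of what compliant throttling looks like, the pacing described above can be expressed in a few lines of Python. This is an illustration, not provided software: the `send_one` callable and the addresses are placeholders, and your mailing-list application or script would supply the real delivery step.

```python
import time

HOURLY_LIMIT = 450                    # per-domain outgoing limit
SEND_INTERVAL = 3600 / HOURLY_LIMIT   # 8.0 seconds between messages

def max_per_hour(interval_seconds):
    """Messages per hour allowed by a given pause between sends."""
    return int(3600 // interval_seconds)

def send_all(recipients, send_one, interval=SEND_INTERVAL, sleep=time.sleep):
    """Deliver to each recipient via the caller-supplied send_one
    callable, pausing between sends so the rate stays at or below
    the hourly limit."""
    for i, address in enumerate(recipients):
        send_one(address)
        if i < len(recipients) - 1:   # no pause needed after the last one
            sleep(interval)
```

One message every 8 seconds works out to at most 450 messages per hour, which is exactly the limit stated above.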

2. Mailing lists of over 900 email addresses may only be sent to during off-peak hours to prevent high server loads. Off-peak times are all day Saturday and Sunday, and 1 AM - 8 AM Eastern Standard Time, Monday through Friday.

3. All email lists must be a “Double Opt-In list”. This means a user has subscribed for a newsletter or other email marketing messages by explicitly requesting it and confirming the email address to be their own.

Confirmation is usually accomplished by responding to a notification/confirmation email sent to the email address the end user specified. The double opt-in method eliminates the chance of abuse where somebody submits someone else's email address without their knowledge and against their will. You will not be permitted to mail any list that you were given or that you purchased. Doing so will be considered spamming and may result in just cause for suspension or termination of the offending account.

Email Scripts must be able to handle and document all information from a double opt-in list. This includes recording the sign-up IP address and date/time, double opt-in verification IP address and date/time, processing opt-outs (via web or email), and list removal on bounce backs. All opt-outs or bounce back removals must be handled in a timely manner, and outbound mail must be throttled to a maximum of 450 (four hundred fifty) emails per hour. If your account is found to be using a script that does not meet these requirements, Active Technologies reserves the right to suspend, terminate, or deactivate your script or account.

4. Sending of unsolicited e-mail will result in suspension or termination of the offending account. We take a zero tolerance stance against sending of unsolicited e-mail and other forms of spam.

5. Any and all email lists MUST comply with all guidelines set forth by the United States government found at .

6. Active Technologies does not permit direct SMTP mailers such as Darkmailer or The Bat!. All mail should be sent through the local mail server/MTA for further delivery by the server and not directly by scripts.


Add Google Chrome Bookmarks to FireFox

Export bookmarks from Chrome

  1. In the top-right corner of the browser window, click the Chrome menu.
  2. Select Bookmarks -> Bookmark Manager.
  3. In the Bookmark Manager menu bar click Organize.
  4. From the dropdown menu select Export bookmarks to HTML file...

Import bookmarks to Firefox from HTML file

  1. Open Firefox.
  2. From the Firefox menu select Bookmarks -> Show All Bookmarks to open the Library window.
    (Sometimes the Bookmark icon is not there. If not, click "Customize" at the bottom of the Firefox menu and add it.)
  3. From the toolbar in the Library window, click Import and Backup and, from the dropdown menu, choose Import Bookmarks from HTML...
  4. In the new Import Bookmarks File window that opens, browse to the location of the HTML file that you exported from Chrome.
  5. Click the Open button.
  6. To rearrange the order in which the bookmarks are displayed select Bookmarks -> Show All Bookmarks to open the Library window and drag the bookmark files and folders to their new positions.

Apache What Is It

The Apache HTTP Server, commonly referred to simply as Apache, is a web server notable for playing a key role in the initial growth of the World Wide Web. Apache was the first viable alternative to the Netscape Communications Corporation web server (currently known as Sun Java System Web Server), and has since evolved to rival other Unix-based web servers in terms of functionality and performance.

We use Apache at,, and The project's name was chosen for two reasons:[1] out of respect for the Native American Indian tribe of Apache (Indé), well-known for their endurance and their skills in warfare,[2] and due to the project's roots as a set of patches to the codebase of NCSA HTTPd 1.3 - making it "a patchy" server.[3]

Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation. The application is available for a wide variety of operating systems, including Unix, FreeBSD, Linux, Solaris, Novell NetWare, Mac OS X, and Microsoft Windows. Released under the Apache License, Apache is free, open source software. Since April 1996 Apache has been the most popular HTTP server on the World Wide Web; since March 2006, however, it has experienced a steady decline in market share,[4] lost mostly to Microsoft Internet Information Services and the .NET platform used by some large blog providers.[5] As of December 2007 Apache served 49.57% of all websites.[6]


The first version of the Apache web server was created by Robert McCool, who was heavily involved with the National Center for Supercomputing Applications web server, known simply as NCSA HTTPd. When Rob left NCSA in mid-1994, the development of httpd stalled, leaving a variety of patches for improvements circulating through e-mails.

Rob McCool was not alone in his efforts. Several other developers helped form the original "Apache Group": Brian Behlendorf, Roy T. Fielding, Rob Hartill, David Robinson, Cliff Skolnick, Randy Terbush, Robert S. Thau, Andrew Wilson, Eric Hagberg, Frank Peters, and Nicolas Pioch. Version 2 of the Apache server was a substantial re-write of much of the Apache 1.x code, with a strong focus on further modularization and the development of a portability layer, the Apache Portable Runtime. The Apache 2.x core has several major enhancements over Apache 1.x. These include UNIX threading, better support for non-Unix platforms (such as Microsoft Windows), a new Apache API, and IPv6 support.[7] The first alpha release of Apache 2 was in March 2000, with the first general availability release on 6 April 2002.[8] Version 2.2 introduced a new authorization API that allows for more flexibility. It also features improved cache modules and proxy modules.[9]


Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These can range from server-side programming language support to authentication schemes. Common language interfaces include mod_perl, mod_python, Tcl, and PHP. Popular authentication modules include mod_access, mod_auth, and mod_digest. A sample of other features includes SSL and TLS support (mod_ssl), a proxy module, a useful URL rewriter (also known as a rewrite engine, implemented in mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter).
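As a small illustration of the rewrite engine mentioned above, here is a minimal, hypothetical mod_rewrite fragment (the rule itself is an example, not taken from any site's configuration) that redirects every plain-HTTP request to its HTTPS equivalent:

```apache
# Hypothetical example: force HTTPS for all requests.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```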

Popular compression methods on Apache include the external extension module, mod_gzip, implemented to help with reduction of the size (weight) of web pages served over HTTP. Apache logs can be analyzed through a web browser using free scripts such as AWStats/W3Perl or Visitors.
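Note that mod_gzip itself is an external module built against Apache 1.3; on Apache 2 the bundled mod_deflate module plays the same role. A minimal sketch of enabling compression with mod_deflate:

```apache
# Compress common text formats before sending them over HTTP
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
</IfModule>
```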

Virtual hosting allows one Apache installation to serve many different websites. For example, one machine with one Apache installation could simultaneously serve several domains, such as example.com and example.org.
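A minimal name-based virtual hosting sketch in Apache 2.2 syntax, using the reserved example.com and example.org domains as placeholders:

```apache
# One Apache installation answering for two different websites
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```

Apache picks the matching VirtualHost block by comparing the Host header of each request against the ServerName directives.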

Apache features configurable error messages, DBMS-based authentication databases, and content negotiation. It is also supported by several graphical user interfaces (GUIs) which permit easier, more intuitive configuration of the server.

Apache is primarily used to serve both static content and dynamic Web pages on the World Wide Web. Many web applications are designed expecting the environment and features that Apache provides.

Apache is the web server component of the popular LAMP web server application stack, alongside MySQL, and the PHP/Perl/Python programming languages.

Apache is redistributed as part of various proprietary software packages including the Oracle Database or the IBM WebSphere application server. Mac OS X integrates Apache as its built-in web server and as support for its WebObjects application server. It is also supported in some way by Borland in the Kylix and Delphi development tools. Apache is included with Novell NetWare 6.5, where it is the default web server.

Apache is used for many other tasks where content needs to be made available in a secure and reliable way. One example is sharing files from a personal computer over the Internet. A user who has Apache installed on their desktop can put arbitrary files in the Apache's document root which can then be shared.
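As an illustrative configuration (the /var/www/share path is a hypothetical example), a browsable listing of such shared files can be enabled like this, again in Apache 2.2 syntax:

```apache
# Expose a shared folder with an auto-generated directory listing
<Directory /var/www/share>
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```

The Indexes option is what makes Apache generate a file listing when no index page is present.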

Programmers developing web applications often use a locally installed version of Apache in order to preview and test code as it is being developed.

Microsoft Internet Information Services (IIS) is the main competitor to Apache, trailed by Sun Microsystems' Sun Java System Web Server and a host of other applications such as Zeus Web Server. Some of the biggest web sites in the world are run using Apache. Google's search engine front end is based on a modified version of Apache, named Google Web Server (GWS).[10]  Wikimedia projects, including Wikipedia are also run on Apache servers.


Apache2 Change Default Index

Open the "default" file in the "sites-available" directory (on Debian / Ubuntu, /etc/apache2/sites-available/default). In there you will see an entry for Directory that looks like this:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride None
    Order allow,deny
    allow from all
</Directory>

Make the desired change. For example, to serve a file named home.html as the default index ahead of index.html, add a DirectoryIndex line:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    DirectoryIndex home.html index.html
    AllowOverride None
    Order allow,deny
    allow from all
</Directory>

Then reload and restart your Apache server. The exact command varies by distribution, but on Debian / Ubuntu it is:

sudo service apache2 reload && sudo service apache2 restart


Apple Search vs. Google Search: There Is a Difference

(Thomas Claburn, InformationWeek) Updated Google Search App for iOS is a viable alternative to Apple's Siri.

Google has updated its search app for iPhone and iPad, and based on a limited set of comparable queries, the updated software matches or exceeds Siri, Apple's voice search software of recent model iOS devices.

Version 2.5 of Google Search App answers questions more effectively than its predecessor, thanks to integration with Google's Knowledge Graph, the company's system for assessing the meanings of queries and for associating related facts.

Google's technical improvements in speech recognition are due to the massive amount of anonymized search query data it collects, and to data available on the open Web. Google research scientist Ciprian Chelba notes in a blog post that the company's speech recognition error rate declines as the amount of data available for analysis rises.

I compared Google Search App to Apple's Siri on an iPhone 4S, over Wi-Fi and AT&T 4G, by asking the same query to each. More often than not, both provided useful results. Google's app performed noticeably faster than Siri in several instances. However, Siri's presentation of data was often more appealing.

Query: Who let the dogs out?

Google Search App returned a list of appropriate YouTube videos atop the search results list. Siri made a joke of it, returning the spoken and text answer: "Who? Who? Who? Who? Who?"

Score one for Google on accuracy. Score one for Siri on humor.

Query: Where is Paris?

Google Search App returned the spoken answer, "Here is a map of Paris, France." It also showed the map on the phone, atop search links to Wikipedia. Siri returned the spoken and written answer, "Here's Paris," and presented an Apple map.

In this instance, Siri was just as responsive as Google.

Query: Tell me about the Giants.

Google Search App presented its baseball score data from the recent World Series, followed by the usual search results list. Siri offered the spoken and written response, "searching the Web for 'the giants,'" then performed a search using Google.

Siri's results were exactly what the Google Search App had come up with. Why ask Siri when Siri just asks Google?

Query: Show me the closest cafe.

Google Search App responded by voice, "There are several listings for 'cafe' less than a mile away." The app browser page contained a local map topped by a list of nearby cafes, most with the word 'cafe' in their names, along with click-to-call, directions, and website buttons on each listing. Siri responded with the spoken and text answer, "I've found fifteen cafe restaurants fairly close to you. I've sorted them by distance."

Siri's listing was formatted more attractively than Google's, but lacked actionable elements like a click-to-call phone number on the listing page -- tapping on a listing opened a restaurant-specific information card that contained a phone number and website. Neither Siri nor Google took the query literally and returned the one closest cafe. And both failed to return the cafe that was actually closest to me at the time of my query in their respective lists.

Query: What movies are playing nearby?

Google Search App responded by saying, "Movies playing in San Francisco," and presented a scrollable gallery of movie posters with show times at nearby theaters below. Siri responded by saying "I'm looking for movies," followed by "I found quite a number of movies playing nearby today" and an attractively formatted listing designed with a movie marquee motif.

Siri's two-step response was slower than Google's, but the Siri layout was better organized.

Query: What's the price of Google's/Apple's stock?

Google Search App responded by saying the current price of Google's stock and presenting an info card with lots of useful financial information. Asked when the market was closed, Google's software reported after-hours trading. Siri responded just as Google's app did, by reciting the current Apple stock price and presenting a Yahoo Finance chart.

Google did better in this instance. Siri said nothing about after-hours trading and its Yahoo-supplied chart was not interactive like Google's. Tapping on Siri's chart launched the Yahoo-powered iOS Stocks app.

Query: Play the trailer for "The Hobbit."

Google Search App quickly loaded the trailer for the upcoming movie "The Hobbit" at the top of a Web search listing and then opened a separate YouTube tab to start playing the video. Siri responded by asking "which one?" and then listing the "The Hobbit" (2012), its 2013 sequel -- which doesn't yet have an official trailer -- and the 1977 animated version.

Siri deserves credit for realizing there's more than one version of "The Hobbit," but shouldn't have included next year's sequel. Google showed more sense by guessing what I meant.

Query: How do you say "help" in Spanish?

Google Search App responded by saying, "Here is your translation," and then presenting an information card with the word "ayudar." Siri responded, "Let me check that" and then said "I found this," and presented a Wolfram Alpha-supplied page that correctly assessed the question and returned "ayuda" along with a variety of synonyms and related words like "recursos," "evitar" and "asistir." The skeuomorphic design of the page was oddly retro, with the page mocked up to include vertical lines of dot-matrix printer holes.

Siri's answer was more complete, but wasn't served as quickly as Google's. I was surprised that both Apple's and Google's software converted my spoken questions into text with 100% accuracy.

Overall, Siri has an edge on the iPhone because Siri can be launched simply by holding the home button. That process is likely to be quicker than entering a phone passcode and scrolling to the Google Search App to open it.

However, once Google's app is accessible, I found it to be more responsive. Also, I prefer Google's synthesized voice to Siri's. Google's presentation of information could use some improvement.


Belgian gov't moves toward OpenDocument format By Eric Lai

In another blow to the supremacy of Microsoft's Office franchise, Belgium on Friday became the second governmental body to approve the use of the OpenDocument format as a way to exchange government documents.
By September 2007, all Belgian federal agencies must use software that can read reports, spreadsheets, presentations and other types of data files saved in OpenDocument (ODF), a free XML file format certified as a standard by the International Organization for Standardization (ISO) last month.

"If the impact analysis shows no adverse impact, ODF might even become the standard exchange format in September 2008," according to information posted on the Web site of the Belgian Government Interoperability Framework. Belgium joins the state of Massachusetts in a symbolic break from Microsoft. Massachusetts plans to make ODF its standard for all official government documents by Jan. 1.
Though it is a small country with just 10 million citizens, Belgium's embrace of ODF could have big ripple effects. Its capital, Brussels, is the headquarters of the European Union, making it an important political center in Europe.

The Danish government voted earlier this month to move to open technology standards, though it has not yet decided whether that would involve ODF, according to sources. The Norwegian government is also considering moving to ODF.

"Given the current favorable attitude of the European Community to open standards, and, in particular, to the way that European governments and initiatives are defining open standards, it is not surprising that a growing number of European countries are moving to adopt ODF, which is conducive to not only proprietary but open-source implementations as well," said Andy Updegrove, an open-source advocate and Boston-based lawyer.
ODF is the default file format in OpenOffice, StarOffice and an increasing number of Web-based word processing and spreadsheet applications. Though only utilized by a small percentage of users worldwide today, ODF is supported by some of the largest technology vendors, including IBM, Sun Microsystems Inc. and Novell Inc. They argue that the format's open, interoperable nature makes it suitable for groups concerned about long-term archiving of files.

ODF's chief opponent is Microsoft Corp., whose market-leading Office suite is used by more than 400 million people worldwide, according to the vendor. By default, Office applications such as Microsoft Word, Excel and PowerPoint save files in proprietary file formats owned by Microsoft, such as .doc, .xls and .ppt, respectively -- although users can also choose open formats such as Text (.txt) and Rich Text Format (.rtf).
Some experts say that users' need to remain compatible with Office formats has been key to maintaining Microsoft's market dominance. Microsoft has argued that ODF stifles its own innovation and limits customer choice. It is developing a competing format, OpenXML, that will debut in its forthcoming Office 2007. Microsoft has garnered the support of other vendors for OpenXML and is applying to ECMA International, a rival standards body to ISO, for certification as an open standard.

"We understand the Belgian government's desire to support open-standard document formats and understand that other formats can qualify over time," said Alan Yates, Microsoft's general manager for information worker business strategy. "We believe that OpenXML formats will meet the government's criteria shortly."
In Massachusetts, the controversy that followed last year's decision to adopt ODF led to the resignation of then state CIO Peter Quinn. His successor, Louis Gutierrez, last month put out a search for third-party plug-in software that would enable Office users to easily read and save files in the OpenDocument format.
Massachusetts is now testing those plug-ins, according to Gary Edwards, a software developer who has created plug-ins for Word and Excel and is working on one for PowerPoint. Edwards said his plug-in allows Office users to save files in the OpenDocument format without losing formatting information and can open those files up to 25% faster than equivalent Office files.

Such plug-ins could enable Massachusetts to continue using Office in the short run or indefinitely while fulfilling its requirement to move to ODF, Gutierrez said. It is unclear whether such plug-ins, combined with Microsoft Office, would meet the mandate from the Belgian government.

Best approach to moving to the cloud

What's the best approach to moving to the cloud? According to Rob Howe, IT director at Guinness World Records (GWR), it's to tread very carefully. Rather than rushing into cloud-led digital transformation, he believes CIOs should evaluate workloads, find partners and -- only then -- think about how on-demand IT can help deliver innovative services to customers.

"Cherry pick the key elements -- understand the changes you need to make to the infrastructure, and the behavior of software, when you're moving over," says Howe. "Be aware of the differences and account for them. Not all systems will move easily. The cloud should not be viewed as a one-size-fits-all solution to your business challenges."

Howe joined GWR in May 2012, but he didn't alter anything in terms of IT during his first six months at the firm. "In fact, I didn't even plan anything," he says. "I just sat with the teams across the business and understood how they were working. You can't implement changes without understanding the day-to-day workloads of people across the business."

This careful strategy has produced great results, says Howe. He advises other CIOs to take a similarly integrated approach to digital transformation. "If you're not involved in the operational side of the organizations, then you're probably going to fall over something further down the line," he says.

By working alongside the rest of the business, Howe has led a staged approach to digital transformation that has included the implementation of a records management platform from SDL, a digital asset management system from Asset Bank, and Salesforce CRM technology.

As a final stage of the transformation process, GWR chose Ensono to manage the migration of its business-critical IT architecture to an Amazon Web Services (AWS) platform. The project ensures the company's IT system can manage the ongoing transformation of its business from a publishing organisation to a digital media agency.

Howe says his team started the transition to the cloud in February. The full move will be complete by the end of this year. He says the main challenge has been to co-ordinate a range of partners. As well as key partner Ensono, GWR has drawn on eight other providers during the transition, including SDL and Asset Bank.

"Some of these providers have to redesign their platforms to work more effectively on AWS," says Howe. The result, however, is GWR benefits from an agile base for service delivery and can operate in a cloud-first manner. "We can deliver features to our end-customers more effectively and it provides more flexibility to our business," he says.

Howe says his IT department tries to be at least alongside, but preferably in front, of the rest of the organization. "We look at the business plan, make predictions and try and get ahead," he says. "Our aim is to create flexible solutions that allow the rest of the business to focus on their main activities, rather than having to wait months while we set up the technology."

The cloud plays a crucial role in this approach.

"It allows us to provide a higher level of service across our public web sites," he says. "And it allows us to run projects that might have a finite life. If there's a multimedia project being run by a team, we can spin up isolated areas and power them down when they're finished, instead of committing to infrastructure spend."

That ability to scale is critical to the organization. Six years ago, the business had clear peaks in traffic -- the launch of its world-famous book of records every September and Guinness World Records Day in November. Today, GWR is less reliant on publishing and operates more like a digital consultancy and its traffic peaks are unpredictable. Howe gives an example.

"On the first day we went live with the new AWS infrastructure, there was a press release for the largest unlimited wave surfed by a woman," he says. "It was huge news in the surfing community and within a few hours we'd received four times our normal daily web traffic. Yet we were able to meet that demand comfortably by just turning on the auto-scaling capability of the cloud."

As well as scalability and flexibility, Howe says the cloud provides other benefits. "It allows us to be more dynamic as a team and to think more carefully about where we should focus our attention," he says. "It gives us better transparency in terms of costs, too."

Howe aims to use the cloud as a platform for further innovation. He says the next step is to convert GWR's application programming interface (API) layer to microservices. Then he and his team will start to think about what type of data should be pushed out to edge locations.

GWR, therefore, has created a solid platform for continued digital transformation. Howe looks back on the changes he's made and says the key best-practice lesson for other CIOs considering a move to the cloud is to focus on planning. "Make sure you and your organization understand the changes you're going to be going through," he says.

Gregor Petri, research vice president at Gartner, says the analyst firm is seeing evidence of cloud vendors providing more options to CIOs. Rather than only offering their own services, increased numbers of vendors are recognizing the power of hyper scale cloud specialists, such as Microsoft, Google and AWS, and offering to run their services on these giants' platforms.

That's an approach that's familiar to Howe. Ensono, rather than pushing a move to the newest version of its private cloud, suggested the best answer was for GWR to shift its infrastructure to AWS. He says the approach has paid dividends for GWR and he suggests other CIOs should consider similar tie-ups.

"Find a partner -- don't try and do everything yourself," he says. "And don't just lift and shift -- as part of the research that I did prior to move, I found all the people who were struggling that had moved to the cloud were people who'd just literally taken their existing infrastructure, replicated it and had found a bunch of problems."

China Creating Home-Grown Operating System

(BBC News) The operating system will be tailored to Chinese calendars and character sets. China is working with software firm Canonical on an open-source operating system customised for Chinese users.

The collaboration will produce a version of Canonical's Ubuntu operating system called Kylin which will be released in April.

The deal is part of a five-year plan by China to get more people using open source software.

This software gives people more access to its internal workings so they can modify it themselves.

The first version of Ubuntu Kylin is intended for desktop and laptop computers. As well as using Chinese character sets, Kylin will also do more to support the way Chinese people interact with computers as well as reflect China's date conventions.

Future versions will include tools that let people use popular Chinese web services such as Baidu maps, the Taobao shopping service as well as versions of office programs and image management tools, directly from Ubuntu's main screen.

The code will be created at a laboratory in Beijing staffed by engineers from Canonical as well as several Chinese R&D agencies.

Canonical is also working with the Chinese Ministry of Industry and Information Technology on a version of Kylin that will run on servers so websites, online shops and hosting firms can adopt the software.

The move is widely seen as an attempt by China to wean its IT sector off Western software in favour of more home-grown alternatives.

Ubuntu is based on the Linux operating system and its development and use is governed by an open ethic that emphasises the sharing of core computer code. It stands in contrast to the closed or proprietary systems of Microsoft and Apple who restrict access to the core or source code for their operating systems.

Chrome OS Cloak of Unhackability

(Katherine Noyes @ LinuxInsider) "Hackers have a hard enough time with a full version of Linux, let alone a pared down version with only a secured browser running as the interface," said Linux Rants blogger Mike Stone. "All the potential options from Linux? They are gone. The hackers couldn't get in when they were there -- they have no hope of getting in now."

Once upon a time there was a modest young operating system named "Chrome OS."

It tried to live a quiet life helping others, but its ancient roots made some in the mainstream computing world wary. Not only was it one of the first examples of a new type of operating system, focused as it was on the browser, but it was also descended from Linux, the very name of which was still widely misunderstood among the masses.

One day, however, young Chrome OS was given a chance to prove itself. In a contest previously focused on its browser cousins, Chrome OS was invited to compete against the world's toughest hackers in the illustrious Pwnium 3 competition.

No Winning Entries

Naysayers laughed as the hackers rolled up their sleeves, but Chrome OS stood firm, secure in all the gifts it had inherited from its forebears.

The crowds watched with breathless anticipation as the hackers threw their deadliest weapons, but little Chrome OS remained standing through onslaught after onslaught. By the end of the day, when it emerged unscathed from a field of felled competitors, it was clear Chrome OS had inherited its ancestors' greatest treasure of all: the Cloak of Unhackability.

"We just closed out the competition," confirmed the final announcement on Google+. "We did not receive any winning entries but we are evaluating some work that may qualify as partial exploits. Thanks to those who attempted, see you next time!"

All eyes in the crowd turned with new respect to the young Chrome OS.

'I Applaud Chrome OS'

Now, amid the seemingly endless stream of Chromebooks that continue to arrive on the market, Linux bloggers have had little else but this epic tale on their minds.

"I applaud Chrome OS," wrote RNR19952 on PCWorld, for example.

"I'd pay (US)$40 to just be able to install Chrome OS on my existing Samsung laptop... especially in a dual-boot environment," wrote VanceVEP72.



'3 Cheers for the Chromebook!'

"I am a convert," admitted RobieJay. "I have been using the 2GB Samsung C500CE since August 2012."

And again: "I've used about every system out there since 1980. In my humble opinion, ACER and GOOGLE got it right with their Chromebook," agreed Cumbey. "My Chromebook is simplicity itself and just plain joy to use. Three cheers for the Chromebook!"

Down at the Linux blogosphere's Broken Windows Lounge, patrons have had no shortage of their own thoughts to share.

'Fish in a Barrel'

"Saw this coming a mile away," offered Linux Rants blogger Mike Stone, for example.

"I expect we'll see Chrome OS appear in a couple more Pwniums at max before they're removed from the docket because no one even tries to crack them," he predicted.

"There are a whole lot easier targets in Windows and OS X," Stone concluded. "It's like one really fast and smart fish in the Ocean, and a whole bunch of really stupid fish in a barrel."

'It May Take Longer'

On the other hand, "I am a little surprised," Google+ blogger Kevin O'Brien told Linux Girl.

"I expect Linux to be more secure, but it is a truism that any computer can be defeated with a sufficiently ingenious attack, and there are some smart people competing in this," O'Brien explained.

"I would bet that by next year it will get pwned, but it may take longer than other, less secure operating systems," he added.

'It Was Inevitable'

A big part of Chrome OS's advantage is likely "due to the fact that Google actively rewards bug hunters throughout the year rather than just at a single event," consultant and Slashdot blogger Gerhard Mack suggested.

Similarly, "the simpler the software, the less likely that an unnoticed security exploit was overlooked during review," offered Robin Lim, a lawyer and blogger on Mobile Raptor.

"While I am not taking anything away from the excellent work of the Chrome OS team, it was inevitable that the simplest OS had the best chance of emerging unscathed," Lim pointed out.

'Far Easier to Secure'

Last but not least, "ChromeOS did survive pwnium, but Google cheated a bit by releasing fixes immediately before the event," blogger Robert Pogson pointed out. "All's well in love and war, according to M$, so they shouldn't mind."

Still, "Chromium OS is a good idea," Pogson opined. "An OS with minimum capability is far easier to secure than one unlocked and able to do more general tasks. When you want security, Chromium OS or a good thin client is the way to go."

In organizations with large numbers of seats, meanwhile, "that other OS's requirement to add a server to the mix also increases insecurity," he concluded. "Pwnium did not test that, but I have seen a lot of malware spread from one machine to the next by M$'s weak networking skills."

Chromebook Makes A great second computer

Take a moment to think about how computers are used in your home. How much of that time do you spend browsing the web, working on word processing documents or presentations and checking email and social networks? If your answer is a good chunk of the time, you may be a candidate for a Chromebook computer.

Chromebooks run Google's Chrome OS, which looks like the Chrome Web browser but runs apps as well. In fact, there's a whole ecosystem of Chrome apps available through the Chrome Web Store. There are games, like Angry Birds Heikki, Battlefield and Need for Speed World; productivity tools, including Dropbox, Picasa and Evernote; and, of course there are the Google apps, like Google Docs, Gmail and Google Maps. Currently, there are tens of thousands of apps available through the Chrome Web Store—some that are primarily web-based and some that run within a browser tab, but have been downloaded and work offline.

Chromebooks are best for people who always have access to an Internet connection. That's because many of the apps are built to run online, though staples like word processing and mail will work offline as well. And, you'll be storing most of your documents online, which is fine thanks to the 100GB or more of free online storage that comes with all models.

Google has taken advantage of the way Chrome OS works to bring a high level of security to Chromebooks. The OS is automatically updated, so security fixes are installed without user intervention. Chrome OS treats each tab in the browser as a sandbox, so if malware is encountered, it can't escape the tab. And, each time the system starts, it does a self-check and makes any necessary repairs. In fact, Google is so confident in the security of Chrome OS that the company is hosting a hack-a-thon and awarding anyone who "breaks in" a prize of $150,000.

The other selling point of Chromebooks is their ultra-low price point. Aside from the ultra-sleek, touchscreen Chromebook Pixel ($1,299) introduced earlier this week, models range from $199 to $329. Usually, when you see computers priced this low, it's a red flag for cheap, chunky construction, unresponsive programs and tiny keyboards and displays. Not so with Chromebooks. The svelte 11.6-inch Samsung Chromebook ($249) is just 0.7 inches thick, with sleek styling (albeit in plastic), and easily handles the Chrome apps. Likewise, HP Pavilion Chromebook ($329) tackles apps efficiently, but with a roomier 14-inch display.

Naturally, Chromebooks are an attractive option for parents to buy for their kids. However, there are no built-in parental controls—though they're in the works. And, technically, kids aren't supposed to have Google accounts (at least those under 13), unless they've received them through their school via Google Education initiative. So you'll have to share your account, when setting up a Chromebook for a pre-teen.

For many people, a Chromebook can't be their only computer. Either they don't have constant Internet access or they need to use software that's not available in the Chrome Web Store, like Photoshop or Skype. You may also have to invest in a Cloud Ready printer, since you can't hook just any printer up to your Chromebook. As a second computer, though, it's a great low-cost choice.

Suzanne Kantra is co-founder and Editor-in-Chief of Techlicious. Email her at

Clean URLs and WP Permalinks Ubuntu 12.04 Apache2

Apache2 a2enmod rewrite on Ubuntu 12.04 and .htaccess for WordPress Permalinks

When I was setting up WordPress to make this blog work, I changed the Settings > Permalinks section to make the post URLs look pretty. So I changed the "Common Settings" to "Day and Name" and saved.

When you do that WordPress writes the .htaccess file on the /var/www directory.
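For reference, the rewrite rules WordPress writes to that .htaccess file normally look like the following (these are the stock WordPress defaults; your copy may differ slightly):

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```

The two RewriteCond lines send any request that doesn't match a real file or directory to index.php, which is how WordPress resolves pretty permalinks.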

Ok, enough introductory small talk! What happens is that the above feature requires that the mod_rewrite (or simply rewrite) module is installed and enabled. Fortunately, it comes pre-installed on Ubuntu 12.04.1, and you just need to issue the

$ sudo a2enmod rewrite

command and that should work, right?

Well, not really… Even though the issued command creates the symlink for the rewrite.load file at /etc/apache2/mods-enabled, you still have to manually edit your /etc/apache2/sites-enabled/000-default file and change the AllowOverride directive from None to All at the /var/www Directory section.

So the section that looks like:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    # change the following line
    AllowOverride None
    Order allow,deny
    allow from all
</Directory>

Should look like:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    # now Apache will read .htaccess
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>

Restart Apache and everything should work. By changing the AllowOverride directive to All you’re telling Apache to read and load the configuration found in the .htaccess file in that Directory. If it is set to None, the .htaccess file is ignored.

Cloud-based Apache OpenOffice


(Nick Heath @ ZDNet) A cloud version of OpenOffice Writer, AOO's document editor, was shown at a presentation at the ApacheCon Europe event in Germany this week.

The presentation, by IBM Symphony Documents team members who work on the AOO project, said having a platform-independent, cloud-based deployment would allow AOO to be used on mobile devices - massively increasing the potential user base of the productivity suite.

Implementing this HTML5-based version of AOO would come at a "small" cost to the development of AOO, the presentation added.

The prototype version of cloud AOO runs from within a web browser. The browser connects to a remote server running a "headless" instance of AOO that listens to the actions the user performs and returns XML snippets that are used to render the OpenOffice GUI.

Future enhancements to the system could feature an adaptive UI, which would align its style with that of the mobile OS being used.

The Apache Software Foundation has said the code for cloud AOO is at an early stage. The team is reportedly facing challenges in ensuring smooth rendering of the GUI across different browsers, and in working around the fact that OpenOffice processes can't be shared among users.

The team working on cloud AOO has previously said it hopes the SaaS spin-off would allow AOO to compete with Microsoft in the cloud services space, where Microsoft has its Office 365 offering.

Another talk at ApacheCon demonstrated how to integrate Apache OpenOffice with OpenSocial, the open APIs that allow developers to access core functions and information at social networks.

The team working on AOO plans further integration with standards for sharing online information, such as CMIS, OpenSocial and OData, in the releases of AOO due out next year.


Connecting Motorola Razr M To Ubuntu 12.04 with MTP

How to setup MTP under Ubuntu

This guide was created using information taken from:

and others. They show you how to determine all the ID information required to set up MTP mounting. This guide will just focus on what you need to get your Asus TF700 connected.


The latest Android devices use two different USB connection modes: PTP and MTP.

In PTP mode, the tablet is seen as a digital camera. This works out of the box under Ubuntu 12.04, but you can only access the DCIM directory of your device.

In MTP mode, the tablet is seen as a multimedia device where you can access the complete exported filesystem. This does not work out of the box under Ubuntu; it needs some setup and configuration.

Note: Gvfs has been updated in Ubuntu 13.04 Raring Ringtail, bringing a new MTP backend which allows users to access Android 4.0 devices that do not support USB Mass Storage. So this should work out of the box with Raring.

Even though you can easily access an MTP device through a program called gmtp, it is not as convenient as a conventional USB mass-storage device, accessible directly from Nautilus.

This guide explains how to configure your Ubuntu computer to directly access your Asus TF700's exported filesystem in MTP mode as soon as you plug it into a USB port.

Thanks to some udev rules, your tablet will automatically mount when the device is plugged in and unmount when it is unplugged.

This guide was written for the Asus TF700, but it should work with any other MTP device (Android phone or tablet, MP4 player, ...) if you adapt the udev rules to your device.

The Basic steps are:

1) Allow non root users to access root mounted filesystems
2) Install go_mtpfs
3) Configure udev rules to mount and unmount Android filesystems

Configuring FUSE:

Ubuntu doesn't allow normal users to access the FUSE configuration file. This is a bug, which is quite easy to correct by giving the read attribute to /etc/fuse.conf.

By default, FUSE does not allow mounted filesystems to be accessed by anyone other than the user who mounted them. As the MTP filesystem will be mounted by root but used by any user, we need to modify FUSE's default behaviour to allow this. This is configured by uncommenting the user_allow_other key in /etc/fuse.conf.

# sudo chmod a+r /etc/fuse.conf
# sudo gedit /etc/fuse.conf

# Allow non-root users to specify the 'allow_other' or 'allow_root' mount options.
user_allow_other

Install go-mtpfs:

There are various programs to add support for mtp to Ubuntu. After trying a number of them I have found go-mtpfs works best for me. You certainly could install a different program and just adjust the guide to suit. Go-mtpfs is available in the WebUpd8 Unstable PPA, for Ubuntu 13.04, 12.10 and 12.04. Add the PPA and install it using the following commands:

# sudo add-apt-repository ppa:webupd8team/unstable
# sudo apt-get update
# sudo apt-get install go-mtpfs

If you are not happy installing prebuilt binaries from repositories, you can easily get the source from GitHub and build it yourself.

Once that is done you need to create a mount point for your Android filesystems. I have chosen TF700 under /media. This mount point should be accessible by anybody as you will later mount the device using your user account.

# sudo mkdir /media/TF700
# sudo chmod 777 /media/TF700

Add the udev rules:

This is the core of making the auto mounting work. The first link I reference has all the information on how you discover the Vendor and Product ids. While it is interesting reading, I have just skipped over all that and supplied the TF700 specific values. If you are trying to setup another device you WILL NEED TO read it and get the appropriate values, as they are device specific.
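If you do need to adapt the rules, the vendor and product IDs can usually be read straight from lsusb while the device is plugged in. A sketch (the 0b05:4c91 pair is the TF700's; the /dev path is only an example and will differ on your machine):

```shell
# Each line ends with "ID vendor:product"; for the TF700 you
# would see a line containing: ID 0b05:4c91
lsusb

# udevadm prints the ID_MODEL / ID_MODEL_ID properties the
# rules below match on (substitute your device's bus path)
udevadm info --query=property --name=/dev/bus/usb/001/005
```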

# sudo gedit /etc/udev/rules.d/51-android.rules
# Asus TF700
SUBSYSTEM=="usb", SYSFS{idVendor}=="0b05", ATTR{idProduct}=="4c91", MODE="0666" #MTP media, USB debug on

# Asus TF700 MTP mode under JB 4.2+ : automatic mount & umount when plugged (USB debug on & off)
ENV{ID_MODEL}=="ASUS_Transformer_Pad_TF700T", ENV{ID_MODEL_ID}=="4c91", ACTION=="add", RUN+="/usr/bin/sudo -b -u YOUR_LOGIN /usr/bin/go-mtpfs -allow-other=true /media/TF700"
ENV{ID_MODEL}=="ASUS_Transformer_Pad_TF700T", ENV{ID_MODEL_ID}=="4c91", ACTION=="remove", RUN+="/bin/umount /media/TF700"

You need to change "YOUR_LOGIN" to your username.

The last step is to restart udev so the new rules become operational.

# sudo service udev restart

Declaration in /etc/fstab:

At this stage, you are able to automatically mount and unmount your TF700. Your device should appear after a few seconds in the Nautilus computer section.

You can now browse your device straight from Nautilus: copy files to and from your TF700, rename files, and so on.

Be aware that you will face some limitations, as MTP is not a real filesystem protocol:

* you cannot copy files bigger than 2 GB,
* you cannot create empty files,
* you cannot move files within the device with drag & drop,
* you cannot open files in write mode directly from the TF700.

Another problem is that you cannot unmount your TF700 straight from Nautilus. If you click the Eject button, you will get an error message saying:

umount: /media/TF700 is not in the fstab (and you are not root)

To solve that, the device has to be declared in /etc/fstab with its FUSE characteristics.

As /etc/fstab uses SPACE as a separator, and our TF700's FUSE characteristics include a SPACE character, we have to convert it to its octal escape code \040.

# sudo gedit /etc/fstab

DeviceFs(ASUS\040Transf)    /media/TF700   fuse.DeviceFs(ASUS\040Transf)    allow_other,rw,user,noauto    0    0
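The \040 escaping used above can be sanity-checked with printf, whose %b format expands octal escapes:

```shell
# \040 is octal 40 = ASCII 32, a space, so this prints "ASUS Transf"
printf '%b\n' 'ASUS\040Transf'
```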

When you remount it in MTP mode, your TF700 will be seen as a device. In Nautilus, it appears in the device section and you are now able to Eject it.

Convert Old Kindle Fire into a Nexus 7 running Android 4.2.1 Jelly Bean


Amazon Kindle Fire

Kindle Fire Nexus 7 Hack

Have an old Kindle Fire lying around? Don't trade it in just yet, because with a little bit of tinkering you can turn it into what amounts to a Google (GOOG) Nexus 7. XDA-Developers user "Hashcode" has written up instructions on how to install Android 4.2.1 on an original Kindle Fire with almost every feature intact. If you can live without the microphone (sound still works), deep sleep mode, the Swype keyboard, multi-user profiles and USB camera support, then you're good to go. All of the major tablet features, including hardware-accelerated HD video for YouTube and Netflix (NFLX), work smoothly, and Liliputing's hands-on video suggests the transformation works really well for browsing and games. The only downside is that battery life is not very good. True, you won't get the Nexus 7's higher-resolution display or sleeker design either, but it's still a handy way to repurpose an old tablet.

[More from BGR: New BlackBerry 10 images show off home screen UI, notifications and key apps]

This article was originally published by BGR


Create Ubuntu Custom Installation Disk

How to Customize an Ubuntu Installation Disc – The Right Way (14.04 Compatible!)

If you’re like me, you’ve wanted to customize an Ubuntu install DVD for a long time – but all the tools/directions for doing it are out of date and/or broken. Look no further!

I have successfully customized an ISO of Xubuntu 14.04 for my project Builduntu, but this guide should work for just about any flavor of Ubuntu, maybe even other Linux distributions. If you aren't sure, give it a try! Mint and Debian are very similar and may work with minimal changes to the commands (i.e., replace apt-get with whatever package manager the particular distro uses). It helps if the distro you want to customize is the same as what you are currently running, but this is not necessary.


Let’s get to it.

First, download the ISO you'd like to customize from Ubuntu's release server. Remember where you save it, because you're going to have to move it in a minute.

From here on out, it’s bash command line. Don’t worry, it’s the easiest way of doing this (for now). You don’t need to be a Linux guru, just pay close attention to the directions and it will work fine.

Make sure the prerequisite software is installed for unpacking and repacking the image. Open a terminal and run:

sudo apt-get install squashfs-tools genisoimage

Create a fresh folder to begin work. For the purposes of this guide, everything will be done from the starting point of the user’s home directory (indicated in Linux by a tilde “~”). Approximately 10 gigabytes total of free hard drive space is required for decompressing the ISO filesystem and repackaging it at the end.

mkdir ~/custom-img

Move the base ISO downloaded in the first step to the working directory. From here on out, replace "ubuntu.iso" with the name of the image downloaded from the Ubuntu release server (e.g. trusty-desktop-amd64.iso).

mv /path/to/saved/ubuntu.iso ~/custom-img
cd ~/custom-img

Next, extract the contents of the disc image.

mkdir mnt
sudo mount -o loop ubuntu.iso mnt
mkdir extract
sudo rsync --exclude=/casper/filesystem.squashfs -a mnt/ extract

Here’s where things start to get interesting. Extract the filesystem with the following commands:

sudo unsquashfs mnt/casper/filesystem.squashfs
sudo mv squashfs-root edit

You’re going to need network access from within the chroot environment to download and install updated/new packages. Essentially what’s happening is you are going to “log in” to a command line instance of the Ubuntu installation, separate from the host system. Perhaps a confusing concept to wrap your head around at first, but it makes sense when you think about it. Copy resolv.conf from your system into the freshly unpacked fs.

sudo cp /etc/resolv.conf edit/etc/

Mount a few important working directories:

sudo mount --bind /dev/ edit/dev
sudo chroot edit
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devpts none /dev/pts

Now you are actually logged in to the installation instance as root. Neat. Before making changes, a few commands will make sure that everything goes smoothly while modifying packages.

export HOME=/root
export LC_ALL=C
dbus-uuidgen > /var/lib/dbus/machine-id
dpkg-divert --local --rename --add /sbin/initctl
ln -s /bin/true /sbin/initctl

OK, now you can start playing around. This guide is only going to cover adding and removing software, but it’s possible to customize just about anything. Things like custom backgrounds and settings are already documented elsewhere, but be careful! Many of the directions are outdated and the commands may need slight alterations to work correctly. I had to piece this guide together from a few different sources with a whole lot of dead reckoning.

Start by removing the packages you don’t want. Be sure to use the “purge” command so that the system will automatically uninstall and delete the package, which optimizes the space required for the ISO. When you execute purge, read the list of programs to be removed before you select “Y” and make absolutely sure you haven’t accidentally flagged a core system package via association. You will recognize this because the list will contain significantly more packages than those you selected.
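One way to check that removal list before committing is apt-get's simulate flag, which prints what would be removed without touching anything (the package names here are only placeholders):

```shell
# -s (--simulate) performs a dry run; nothing is uninstalled.
# Review the "Remv" lines before running the real purge.
apt-get -s purge package1 package2 package3
```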

apt-get purge package1 package2 package3

I personally remove games, scanning utilities (I don’t have a scanner) and default text editors like abiword and mousepad (geany is the best). Stay away from core components unless you know what you are doing.

Since I am customizing a 64-bit Ubuntu image, I need multiarch (i386) support for some of the programming libraries. The following command is not necessary for everyone, but I recommend it anyway.

dpkg --add-architecture i386

Update the software repositories and upgrade the remaining packages on the system.

apt-get update && apt-get upgrade

Add packages to the system the usual way:

apt-get install package1 package2 package3

You are almost there! Time to clean up:

apt-get autoremove && apt-get autoclean
rm -rf /tmp/* ~/.bash_history
rm /var/lib/dbus/machine-id
rm /sbin/initctl
dpkg-divert --rename --remove /sbin/initctl

Unmount the directories from the beginning of this guide:

umount /proc || umount -lf /proc
umount /sys
umount /dev/pts
sudo umount edit/dev

You have now “logged out” of the installation environment and are “back” on the host system. These final steps will actually produce the ISO. Other guides stop working at this point, but have no fear! The following commands have been tested and verified.

Generate a new file manifest:

sudo chmod +w extract/casper/filesystem.manifest

sudo chroot edit dpkg-query -W --showformat='${Package} ${Version}\n' | sudo tee extract/casper/filesystem.manifest

sudo cp extract/casper/filesystem.manifest extract/casper/filesystem.manifest-desktop

sudo sed -i '/ubiquity/d' extract/casper/filesystem.manifest-desktop

sudo sed -i '/casper/d' extract/casper/filesystem.manifest-desktop

Compress the filesystem:

sudo mksquashfs edit extract/casper/filesystem.squashfs -b 1048576

Update filesystem size (needed by the installer):

printf $(sudo du -sx --block-size=1 edit | cut -f1) | sudo tee extract/casper/filesystem.size

Delete the old md5sum:

cd extract
sudo rm md5sum.txt

…and generate a fresh one: (single command, copy and paste in one piece)

find -type f -print0 | sudo xargs -0 md5sum | grep -v isolinux/ | sudo tee md5sum.txt
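To see what that pipeline produces, here is the same find | xargs | md5sum chain run on a throwaway directory (the /tmp paths are only for illustration):

```shell
# Create a scratch directory with one known file
mkdir -p /tmp/md5demo
printf 'hello\n' > /tmp/md5demo/a.txt

# Same pipeline as above; each output line is "<hash>  ./path"
(cd /tmp/md5demo && find . -type f -print0 | xargs -0 md5sum | grep -v isolinux/) > /tmp/md5sum-demo.txt
cat /tmp/md5sum-demo.txt
# prints: b1946ac92492d2347c6235b4d2611184  ./a.txt
```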

And finally, create the ISO. This is a single long command; be sure to copy and paste it in one piece, replace $IMAGE_NAME with the volume label you want, and don't forget the period at the end, it's important:

sudo mkisofs -D -r -V "$IMAGE_NAME" -cache-inodes -J -l -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -o ../name-of-your-custom-image.iso .

It takes a few minutes, but when it is done you will have a burnable/distributable ISO in your working directory (~/custom-img).

Have fun and good luck! Let me know how customizing works out for you!

If you appreciate my hard work, feel free to buy me a coconut water ( donate ). It will be greatly appreciated!

Debian Goodies

Package: debian-goodies (0.63)


Small toolbox-style utilities for Debian systems

These programs are designed to integrate with standard shell tools, extending them to operate on the Debian packaging system.


 dgrep  - Search all files in specified packages for a regex
 dglob  - Generate a list of package names which match a pattern

These are also included, because they are useful and don't justify their own packages:


 debget             - Fetch a .deb for a package in APT's database
 dpigs              - Show which installed packages occupy the most space
 debman             - Easily view man pages from a binary .deb without
                      extracting it
 debmany            - Select manpages of installed or uninstalled packages
 checkrestart       - Help to find and restart processes which are using old
                      versions of upgraded files (such as libraries)
 popbugs            - Display a customized release-critical bug list based
                      on packages you use (using popularity-contest data)
 which-pkg-broke    - find which package might have broken another
 check-enhancements - find packages which enhance installed packages
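As a quick taste of how these integrate with ordinary shell usage, a couple of hedged examples (the package name and pattern are only illustrations, and debian-goodies must be installed):

```shell
# Search every file shipped by the coreutils package for a regex
dgrep 'GNU' coreutils

# List package names matching a glob pattern
dglob 'apache*'

# After an upgrade, find running processes still using old libraries
sudo checkrestart
```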

Debian Linux is Different 'Cause...

As the Debian GNU/Linux project marks 15 years of existence, how much has it diverged from the intentions with which it set sail? As times change and people correspondingly change, motivating factors often tend to change, and this is reflected in most software projects. Is this true for Debian?

If one goes back to the original manifesto, issued by founder Ian Murdock in 1993 and last revised in June 1994, one sees these words: "Many distributions have started out as fairly good systems, but as time passes attention to maintaining the distribution becomes a secondary concern." (Those uncomfortable with the word manifesto can use the weasel word vision.)

Murdock's reference was to Soft Landing Systems Linux, which was, at that time, the best-known commercial distribution. The Slackware Linux project had kicked off too, but according to its founder Patrick Volkerding, "Slackware started in early 1993, but it wasn't until the middle of 1994 that I was contacted by Michael Johnston of Morse Telecommunications and asked if I was interested in having them publish Slackware commercially."

Arguably, the problem of maintaining a system has been addressed better by Debian than by any other distribution. There are various package management programs - dpkg and apt to name two - which provide a way to install packages and keep one's system updated from a number of sources.

There are other tools such as aptitude and synaptic, the latter providing a graphical interface. These applications provide varying functionality for managing one's system.

Another statement which Murdock made about Debian was that it was "an attempt to create a non-commercial distribution that will be able to effectively compete in the commercial market". Fifteen years on, it is doubtful whether anyone will be able to contest this statement. Debian has proved its merits to the extent that it has been the base for the most successful desktop distribution - Ubuntu - put out by a commercial entity.

It is also the base for the best live distribution - Knoppix.

Can a distribution that serves as the base for others to profit keep its own userbase? Most definitely - indeed, in many cases, people have begun using Linux by installing Debian derivatives. They then find that the old man is much better than the children and switch to Debian, simply because it encapsulates many of the pluses that one cannot find in newer distributions.

Department of Defense (DoD) Officially Adopted Open Source

What does this mean to the open source software movement?

It's been no secret that the DoD has considered open source for a long time. Consideration has finally turned into adoption. The proof? A new site, based on SourceForge, will serve as a repository for open source, defense-related software.

Anyone can join the site - so long as they have a DoD CAC or ECA certificate. The site currently contains only information and no code. The site itself is nothing more than the SourceForge code updated to meet DoD standards.

According to a writer on Slashdot, anyone will have access to the code on the site. I have yet to find any validation of this claim. There are currently only three projects on the site. One of those projects, Bastille, aims to aid in the automation of server configuration. Another project manages requests for proposals. The final project currently on the site automates the secure configuration of Solaris systems. DoD administrators predict there will be 20 projects on the site in the next six months.

Of course, this is fantastic news for open source. What this does is validate, without question, the legitimacy of the open source model. But there is one issue I would like to bring up with regards to this project. I understand this is the DoD we’re talking about, so keeping this software out of the hands of the general public seems understandable. But if, in fact, the general public does not have access to the code, is this really open source? Or is this the DoD taking advantage of the nature of open source?

This all comes, of course, on the heels of the inauguration of U.S. President Obama, who has promised an open U.S. government. So under an Obama presidency we could see open source software used government-wide and the DoD embracing open source for defense software. How does this scenario play out in your imagination? In mine, it plays out with open source software gaining serious traction in a country where it has had problems finding acceptance. When the government and the DoD see that open source is a viable solution, more and more corporate HQs will have less and less reason not to accept open source.

Deploy your own Open Source Cloud with Debian Linux

The Debian Project produces an entirely Free operating system that empowers its users to be in control of the software running their computers. These days, more and more computing is being moved away from user computers to the so-called cloud – a vague term often used to refer to Software as a Service (SaaS) offerings. We are concerned that, without the needed care, this trend might put in jeopardy most of the freedoms that users enjoy when running (on their computers) software that is Free according to the Debian Free Software Guidelines.

We encourage Debian users to prefer cloud offerings where the SaaS infrastructure is entirely made of Free Software and can be run under their control. We encourage Debian users to deploy their own clouds, as the ultimate way to retain all the freedoms that Debian stands for.

To help our users with these tasks, we are proud to announce the availability of several new technologies that would ease the deployment of Debian-based clouds. Starting with the forthcoming release of Debian 7.0 Wheezy, users will find ready-to-use packages for OpenStack® and Xen Cloud Platform (XCP).
OpenStack, the open source cloud OS, was created to drive industry standards and end cloud lock-in. OpenStack is a common, open platform for both public and private clouds, with the support of more than 2,600 global project participants and over 150 industry-leading companies. The open source cloud operating system enables businesses to manage compute, storage, and networking resources via a self-service portal and APIs, on standard hardware and at massive scale. To find out more about OpenStack, you can visit the official website.
The Xen Cloud Platform (XCP) is a Free Software project that delivers an enterprise-ready server virtualization and cloud computing platform. XCP integrates with the following cloud orchestration stacks: CloudStack, OpenNebula and OpenStack. To find out more about XCP, you can visit the website of the project.

The work to finalize Debian 7.0 Wheezy is still ongoing, but packages of the above technologies are already available as part of our testing release. We encourage interested users to test them. In particular:

  • You can set up a minimal but fully functional OpenStack cluster using two computers by following the HOWTO on the Debian wiki
  • You can test XCP installation and deployment by installing the xcp-xapi package and following the instructions in its README.Debian file
  • You can test OpenStack using XCP by installing nova-xcp-plugins in your XCP server, and following the instructions in its README.xcp_and_OpenStack file

"Preserving user freedoms in the cloud is a tricky business and one of the major challenges ahead for Free Software. By easing the deployment of Debian-based private clouds, we want to help our users resist the lure of giving up their freedoms in exchange for some flexibility," said Stefano Zacchiroli, Debian Project Leader.

For Lars Kurth, community manager for the Xen project, "Debian and Xen have a long uninterrupted history: thus, I am really pleased that Debian is the first Linux distribution to contain XCP packages. Until now, it was only possible to use XCP in Linux appliances within a tightly controlled environment. In Debian Wheezy we changed how users interact with XCP, providing much more flexibility and enabling anybody to use Debian as a XCP Dom0 kernel. This enables Debian users to build cloud services based on the leading Free Software virtualization platform that is powering some of the largest clouds in production today."

"Having OpenStack packages included in Debian confirms the great job done by the OpenStack community to deliver high-quality free/libre software. It's exciting to collaborate with the Debian developers to enable building clouds based entirely on Free Software," added Stefano Maffulli, OpenStack Technical Community Manager.
Please let us know, using the Debian bug tracking system, if you encounter any problems.


Difference Between Linux and Windows

What’s The Difference Between Linux and Windows

(Jeff Hough @ Business2Community) Although Windows dominates the Desktop operating system world, eclipsing Linux, Mac OSX and all others combined, that’s not the case when it comes to web servers, with Linux being the most popular. There are pros and cons for both systems, and the choice depends on your specific requirements.

Firstly, what kind of development language or database are you planning to use? Universal languages such as HTML and CSS can be run from either Linux or Windows hosting, and both can support MySQL databases, while only Windows systems can support Microsoft SQL Server (MSSQL). Generally, the decision comes down to whether you'll be using PHP or ASP to build your website.

Linux supports the most common languages and databases, including PHP, Perl, Python and CGI scripting, which are the standard for web pages that require podcasts, shopping carts, and blogging software such as WordPress.

Windows hosting is designed for users who will be using Microsoft’s ASP, ASP.NET, MSSQL or Access databases to backend their websites. It is possible to include such features as blogs and podcasts, and create a shopping cart, although ASP and ASP.NET applications typically require programming on your computer before they can be uploaded and used online. The operating system of your PC has little bearing on the web server software when building your site.

Linux web servers support almost all common proprietary software control panels, including Plesk, cPanel, DirectAdmin, H-Sphere, and Virtualmin Pro, while also supporting a myriad of open source control panels. Options for accessing Windows hosted control panels are fewer, with Plesk being the most common one, which offers only two open source options.

Linux is often considered to be safer than Windows from a security perspective, however, it depends more upon the server setup and the administrators running the server than the operating software itself. As long as the server is managed and maintained conscientiously, with the latest security and performance patches installed, as well as configured for optimal security by an expert, there is little difference between the two.

An important point to note is that Linux is open source software, which enables it to be flexible and more customizable than a Windows web hosting system. This open source model also means that companies are not charged for using it on their servers, making it more cost-effective. Hosting companies will pass on the cost of licensing Windows to the user, which almost always makes it the more expensive option.

Ditching Windows - liberating your computing experience

(Joe Jejune @ TechGuruDaily) Microsoft may be getting on to Windows 9 soon, but it's time to move on for the greater good.

What's really valuable to you in your digital world?

Well, you have your data. Contact lists, photographs, documents, and stuff like that. Then, you have your communications such as email and IM. You have your network. It could be Facebook. It could be Twitter. It could be LinkedIn. Could be a lot of things.

When it comes to your data you can pretty much store it for years and years on a physical drive or in the cloud. It's amazing how much junk you can hold on to when there is no cost and there is no consequence. 

If you hoard physical junk, you may end up dying under a pile of trash surrounded by 30 cats. If you pile up digital crap, you know that someone is just going to give you more storage for free, because they realize they didn't give you enough free storage previously.

What does that have to do with Windows?

To start with, you don't need a single product from Microsoft to be able to do anything that is of any value to you in the digital realm.

Secondly, Microsoft doesn't make products for you. It makes products for corporations, their IT departments, and the network of resellers and systems integrators that use its products to build their own services and products on top of them.

You are essentially tying yourself into a very bloated infrastructure that has no end in sight and no hope of adapting to new software paradigms.

Related: Touch-optimized Microsoft Office gets to Android before Windows

For example, OS X and Linux have both had virtual desktops for years. Microsoft is just about to get them in Windows 9. Then there is the UI thing with Microsoft. They are still struggling with the Start button. What to do, what not to do. 

In fact, Windows 9 is going to take changes in Windows 8 and dial them back apparently.

Rather than being a sweeping overhaul in the vein of Windows 8, Windows Threshold actually appears poised to dial back the gargantuan changes found in that operating system. Microsoft has already announced that the Start menu is coming back to the OS, joined by the ability to run Metro apps in discrete desktop windows. Recent leaks (including Foley and Warren's own reports) suggest other, mostly pro-desktop tweaks are inbound, including virtual desktop support, the removal of the Charms bar, and—possibly—the introduction of Windows Phone 8.1's sassy virtual assistant, Cortana, to Windows proper.

So, just get rid of Windows. Move on. Start admitting to yourself that you don't need it and that you would be better off to ditch it.

You may not like the Genius Bar at an Apple Store. You may never use a Linux system, ever, but you are mobile. You can find an email provider, and docs, and storage, and social media, and everything that you need somewhere else.

It will be better. It will feel like a weight has been lifted off of you. The experience will be lighter. You will feel young again.

You will actually do Microsoft a favor because it may start to understand that less is more. Less legacy. Less application. Less bloat. 

That sounds like Microsoft bashing. It isn't meant to be. Sometimes you have to be cruel to be kind. If the average user rebels against Microsoft, which to some extent they have, then the business world will follow.

What would be great is to see Windows replaced as a platform everywhere because it will free up data and processes from applications that are not only showing their age but are holding people back. Technology and software development should embrace the world of Agile Development, mobile applications, BYOD, and a culture that is more Google and Facebook than Microsoft and Dell.

Ditching Windows - liberating your computing experience

We know this. So why are we afraid of stating the obvious? Windows 9 is coming. So what? We're moving on.

Microsoft is a multi-billion-dollar corporation. It can make a drastic change and embrace a new approach because it has the cash and the following to do so. It would be painful, but less so than sticking to the existing model, which seems to be heavy-metal industrial and very little digital.

What truly matters is your data, your processes, and your ability to do great things with them. The platform is the World Wide Web. The client is mobile. Nothing that Microsoft offers today makes you feel that it is part of our modern digital experience.

We should stop Windows before it gets to Windows 10. Enough is enough. Sometimes you just need to press the reset button on everything.


Drupal 7 Event Calendar

Prepare an Event-Calendar for Drupal 7

Last updated September 27, 2013. Created by wusel on March 11, 2012.
Edited by colan, scalico, wmostrey.

This cookbook shows how you can prepare an event calendar for Drupal 7, using the new method of creating calendars from a template.


Step 1: Before we start

  1. If they are not already enabled, download the Ctools, Calendar, Date and Views modules from drupal.org and install them in your modules path, such as sites/all/modules.
    On the Modules page, enable the modules "Calendar", "Date" and "Views UI". After clicking "Save configuration", click "Continue" to enable all required dependencies as well.
  2. On the Content types page, click on "Add content type".
    Name = 'Event', Description like 'An event with a title, date/time information and a body for storing details.'. Then click on "Save and add fields".
  3. In line "Add new field" select Type of data to store = "Date (ISO format)", Label ="Event-Date", Field name = "field_event_date", Form element to edit the data = "Text field", then click on "Save".
  4. On the page "Field settings": set "Collect an end date" to yes and "Time zone handling" to "No time zone conversion" [1] and then click on "Save field settings".
  5. On the page "Event settings" set "Required field" to yes and then click on "Save settings".
  6. Optional:
    On the Event content type's display settings page, under "Custom display settings", set all options (like "Teaser") to no and then click on "Save".
  7. On the Views page, add "Calendar - A calendar view of the 'field_event_date' field in the 'node' base table.". On "/admin/structure/views/view/calendar/edit" click on "Save".
  8. Open the new calendar view and complete the calendar administration settings as needed.
  9. Add a second menu-entry if you like, e.g.: Menu link title = "Event-Calendar", Path = "calendar-node-field-event-date".
  10. On the Blocks administration page, set the calendar blocks for "View: Calendar" visible as you like.

Step 2: Add one or more events

To test: Click on "Add content" in the Navigation-menu, select "Event":
Title = "My first event", Body like "This is the first event", change nothing else, and click on "Save".
Later you have to enter/import the event-time in the timezone of the location of the event. [1]

Step 3: View the event-calendar

Visit the calendar path you created, e.g. 'calendar-node-field-event-date'.


[1]: This is the recommended setting; it cannot be changed later without first deleting all existing events. With "Time zone handling" set to "No time zone conversion", the time of an event is never shifted between adding/importing and viewing, in either summer time or winter time. You have to enter/import the event time in the timezone of the event's location (like a flight time at an airport, which is always given in local time/the local timezone; the difference between the departure and arrival times differs from the flight duration whenever the two airports are in different timezones).
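
The airport analogy in note [1] comes down to simple timezone arithmetic. Here is a minimal Python sketch; the flight, cities and times are hypothetical, chosen only to illustrate the point:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical flight (illustration only): departs New York (UTC-5) at
# 08:00 local time and arrives in London (UTC+0) at 20:00 local time.
new_york = timezone(timedelta(hours=-5))
london = timezone.utc

departure = datetime(2014, 1, 10, 8, 0, tzinfo=new_york)
arrival = datetime(2014, 1, 10, 20, 0, tzinfo=london)

# Difference between the two wall clocks, ignoring timezones: 12 hours.
wall_clock_diff = arrival.replace(tzinfo=None) - departure.replace(tzinfo=None)

# Actual flight duration, once the 5-hour offset is accounted for: 7 hours.
flight_duration = arrival - departure

print(wall_clock_diff, flight_duration)
```

This is why events must be entered in the local time of the event's location: the stored wall-clock value is only meaningful together with that location.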

[2]: If you use the Feeds module to import events, apply the patch from the related issue until it has been committed to the module.

[3]: If you update the Calendar module, it may be necessary to delete the old calendar view and create a new one using this cookbook.

Drupal 7 Galleriffic Installation

Last updated June 21, 2013. Created by acouch on May 15, 2012.
Edited by peterx, tax14.

1. Prerequisites.

Install the Drupal modules Views, EVA: Entity Views Attachment, and Views Galleriffic. Log in to your site as webmaster and, in the administration toolbar, open Modules; in the group of modules called "Views", enable "Eva", "Views", "Views Galleriffic" and "Views UI".

2. Add two new Image Style Presets:

a. Name the first style 'galleriffic_slide'.

  • Choose Configuration→ Media and click Image styles to display the Image Styles screen.
  • Click Add style and type galleriffic_slide in the Style name box as shown in the following image.

Adding galleriffic_slide style preset

  • Click the Create new style button and Drupal displays the Edit Style galleriffic_slide window.

b. Add the 'Scale and Crop' effect.

  • In the EFFECT Box, choose Scale and Crop and click the Add button.

c. Add desired 'Width' and 'Height'. 500 x 400 is recommended.

  • In the Add Scale and Crop box displayed by Drupal, shown below, enter the desired Width and Height. Recommended values are 500 x 400. Then click the Add Effect button, click Update Style, and close the window.

adding preset effect settings

d. Name a second Image Style preset 'galleriffic_thumb'.

  • Follow the steps described in 2a, 2b and 2c to add an Image Style galleriffic_thumb, add the Scale and Crop effect, and add the desired Width and Height settings. The recommended settings for Width and Height are 75 x 75.
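
To clarify what the Scale and Crop effect does with these dimensions, here is a rough geometric sketch in Python. It approximates the effect's behavior (scale just enough to cover the target, then crop the overflow, centered); it is not Drupal's actual implementation, and the 1000x600 source size is just an example:

```python
def scale_and_crop(src_w, src_h, dst_w, dst_h):
    """Approximate 'Scale and Crop': scale the source so it covers the
    target area, then take a centered crop of exactly dst_w x dst_h.
    Returns the scaled size and the crop box (left, top, right, bottom)."""
    scale = max(dst_w / src_w, dst_h / src_h)  # cover, don't fit
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    left = (new_w - dst_w) // 2
    top = (new_h - dst_h) // 2
    return (new_w, new_h), (left, top, left + dst_w, top + dst_h)

# A hypothetical 1000x600 photo, reduced to the recommended 500x400 slide:
size, box = scale_and_crop(1000, 600, 500, 400)
print(size, box)
```

The same geometry applies to the 75x75 thumbnail style, only with a square target.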

3. Add Gallery content type

In the administration toolbar, go to Structure → Content types → Add Content Type.
Adding galleriffic_gallery content type

  • Type Gallery in the name textbox and click the Save and Add Fields button, and Drupal displays the MANAGE FIELDS window for the Gallery content type.

a. Add 'Gallery Image' field

(If you are not in the MANAGE FIELDS window, in the administration toolbar, go to Structure → Content types and click manage fields in the Gallery row.)

  • Type “Image” in the “Add new field” text box. In the same row, “MACHINE NAME” column, click on “edit”, type gallery_image and choose “Image” in the combobox in the “FIELD TYPE” column as shown.

Adding image field

  • Click the “Save” button to save the image field.
  • Click “Save field settings” button in the “FIELDS SETTINGS” window and Drupal now displays a “Gallery Settings” window.
  • Select 'Enable Alt field', 'Enable Title field' and 'Unlimited' under "Number of values" as shown.

Select 'Required' and 'Unlimited' under "Number of values".

  • Click the Save Settings button.

b. Select 'Hidden' for the gallery image in the "Manage display" setting.

  • Click the MANAGE DISPLAY tab. The FORMAT column for the image row contains Image. Click Image in the FORMAT column and choose Hidden. Your screen should look similar to the following image.

Exclude under display settings.

  • Click the Save button to save settings.

This will keep the images from showing up on the node; we are going to use Views to show them instead. You have defined all the settings. Now it's time to test: go ahead and create a Gallery node with images.

4. Create a Gallery node with a number of images

In the administration toolbar, go to Content → Add content → Gallery, fill the field “Title” and, below the body, use the buttons “Browse” and “Upload” to choose pictures from your computer and upload them, one by one, to the web server.
Adding gallery node
At the end, don't forget to click on the “Save button”, located at the bottom of the page.

5. Create a new View named 'Galleriffic Node Gallery'.

In the administration toolbar: Structure → Views → Add New View.
Creating new view

  • Uncheck Create a page, and do not create a block either. Select content of type Gallery. Note that the screen says "Gallery" instead of "Galleriffic Node Gallery"; you can use either name. Click Continue and Edit.

a. Remove "Content: Title" field from "Fields".

On the “Gallery (Content)” page displayed by Drupal, click “Content: Title” in the “FIELDS” section and click the “Remove” button.

b. Add "title" field from your gallery node

This might be a little confusing because we are going to override the field output.

  • Click the Add button in the FIELDS section and you see an image similar to the following.

adding field

  • Select 'Content: Image'. (Field names on your screen may differ if you used a different field name earlier.) You may type Image in the search box to limit the number of fields displayed on the screen. Make sure that the field you choose comes from the gallery content type (e.g. node:gallery, not node:article).
  • Click Add and Configure fields button.

Drupal displays Configure field: Content: image screen.

  • Enter 'Title' in the Label box, as shown in the following image.


  • In the same screen, click REWRITE RESULTS to expand it, if required. Then check Rewrite the output of this field and enter [field_gallery_image-title] in the "Text" box, as shown in the following image.

adding token

  • Click MULTIPLE FIELD SETTINGS (located above the REWRITE RESULTS section) and uncheck Display all values in the same row, as shown in the following image. This step is important.


  • Click the Apply button.

Drupal returns you back to the View you are creating. It displays the Content: Image (Title) in the FIELDS section. We need to add three more fields.
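
The "Rewrite results" mechanism used in step 5b is simple token substitution: Views replaces each bracketed token in the pattern with the corresponding value for the current row. A simplified sketch of the idea in Python (illustrative only, not Views' actual code; the sample row data is made up):

```python
import re

def rewrite_output(pattern, row):
    """Replace each [token] in the pattern with that row's field value,
    mimicking the Views 'Rewrite the output of this field' option."""
    return re.sub(r"\[([^\]]+)\]", lambda m: row.get(m.group(1), ""), pattern)

row = {"field_gallery_image-title": "Sunset over the bay"}
print(rewrite_output("[field_gallery_image-title]", row))
```

This is why the field label ('Title', 'Description', ...) can differ from the token: the label names the column, while the token picks which stored value fills it.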

c. Add "description" field:

  • Click the Add button in the FIELDS section.
  • Select 'Content: Image' as in the previous step (5b).
  • Click Add and Configure fields button.
  • Enter 'Description' in the Label textbox.
  • Under the "Rewrite results" section, enter [field_gallery_image_1-alt] instead of '[field_gallery_image-title]'.
  • Click MULTIPLE FIELD SETTINGS (located above the REWRITE RESULTS section) and uncheck Display all values in the same row, as shown in the following image. This step is important.


Note: If you don't see the MULTIPLE FIELD SETTINGS section on the Configure field screen, you might have selected node:article field instead of node:gallery. In that case, delete the field and try again.


  • Click the Apply button.

d. Add slide field:

  • Select 'Content: Image' in the field list, same as above.
  • Enter 'Slide' in label.
  • Select galleriffic_slide under "Image style" as shown in the following image.

adding imagefield

  • Click MULTIPLE FIELD SETTINGS and uncheck Display all values in the same row, as shown in the following image. This step is important.


  • Click the Apply button.

e. Add thumbnail field:

  • Repeat steps above substituting 'Thumbnail' for 'Slide'.

Adding thumbnail field

f. Add Format Settings:

  • Click the first item (Unformatted List) in the FORMAT section as shown in the following image.

selecting format

  • On the Master: How should this view be styled dialog box displayed by Drupal, select 'Galleriffic Gallery' and click the Apply button.

selecting format

Drupal displays the Master Style options as shown. These are used to define settings for the gallery. The defaults should be fine to start with; you may change these later.

Master Style Options

  • Click the Apply button to close the Master Style Options dialog box.

g. Add "Show" settings:

  • Click Fields next to Show in the FORMAT section to display the "Master: How should each row in this view be styled" dialog box. Click 'Galleriffic Fields'.
  • Click the Apply button.
  • Select the correct field for each type of row as shown.

Selecting rows

6. Add View to Gallery node.

  • Click 'Add' and choose 'Entity content'.

adding display
If you do not have the EVA module installed, you will not see this option.

  • In the ENTITY CONTENT SETTINGS in the middle of the screen, click "None" next to Entity type and select 'Node'.

selecting entity type

  • Select 'Gallery' under "Bundles" in ENTITY CONTENT SETTINGS.

selecting entity type

  • Add a "Contextual Filter" (under "Advanced") and select 'Content: Nid'.

adding contextual filter

  • Select 'Provide default value' and 'Content ID from URL' in the argument.

configuring argument

  • Click Save to save the view.

7. You are done! Return to your gallery node and see your wonderful gallery!


Drupal 7 Image Gallery Tutorial

(Bryan Braun) This tutorial describes, step by step, how to build a basic thumbnail-based image gallery using Views. The result will look something like this:

(A live demo is available online.)

These instructions are designed for Drupal site builders or admins with a basic understanding of Views and Fields. They assume you are running Drupal 7.x and Views 3.x.

Step 1: Preparation


A views image gallery uses several modules. You can download and install all of these modules (and the modules they require) at /admin/modules/install.

Enable the following contributed modules...

  • Views - an all-purpose "reports" generator for Drupal
  • Chaos Tools - a tools library required by Views

...as well as these core modules:

  • Field
  • Image
  • File

Step 2: Content Configuration

Step 2a: Create Content Types

We need to create a new content type for images we put in our gallery.

  1. Browse to /admin/structure/types and create a new content type called Gallery Image.
  2. Add an image field called My Gallery Image to the content type. This will allow you to upload an image when creating content.
  3. Optional: remove any unnecessary fields like the body field. This content type only needs to be able to upload an image.

Note: Feel free to use whatever names you like for these fields and content types.

Step 2b: Upload some photos as dummy content

  1. Browse to /node/add and add content using the Gallery Image content type we just created.
  2. Use the "My Gallery Image" field to upload one of the photos that you want in your gallery.
  3. Set any other settings as necessary and save the content.
  4. Repeat the steps above until you have loaded 3-5 photos as dummy content.

Step 2c: Create an image style for your thumbnail

  1. Go to /admin/config/media/image-styles and click Add Style to add a style named gallery_thumbnail.
  2. Add an effect of "scale and crop" (several other effects may work for you, so feel free to play with these settings)
  3. Set the width and height to be 150 pixels
  4. Update the effect

It ought to look something like this:

Step 3: Build the View

Step 3a: Create a Gallery View

To display images in a Gallery we will create a view that displays every piece of content you have published under the Gallery Image content type.

  1. Go to /admin/structure/views and click Add new view
  2. For the view name, call it "Photo Gallery"
  3. Set the view to Show Content of type Gallery Image sorted by Unsorted
  4. Check the box to Create a block (and uncheck the Create a page box if necessary)
  5. Name the block title "Photo Gallery"
  6. Set the display format to Grid of fields
  7. Set to 10 Items per page, check to use a pager and click save and exit

Step 3b: Views Configuration

We have now created a view with a block display; next, we need to ensure that all our settings are correct.

First, locate the view in your list of views and click the link to edit it. Make sure your settings match those listed below:

  • Display name: 'Photo Gallery'
  • Title
    - Title: 'Photo Gallery'
  • Format:
    - Style: 'Grid'; Number of columns: '5'; Horizontal
    - Show: 'Fields'
  • Fields
    - Content: 'My Gallery Image'; Formatter: 'Image'; Image style: 'gallery_thumbnail'; Link image to: 'content'
  • Filter Criteria:
    - Content: 'Published (Yes)'
    - Content: 'Published or admin'

These criteria ensure that a photo won't appear in the gallery unless it has been properly uploaded and published as part of your Gallery Image content type. The resulting setup will look something like this (though yours ought to say "Content: My Gallery Image").

Save the view.
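
As a sanity check on these settings, the pager arithmetic is straightforward: 10 items per page filled horizontally into a 5-column grid yields 2 rows of thumbnails per page. A trivial sketch, with the numbers mirroring the settings above:

```python
import math

items_per_page = 10  # pager setting chosen in step 3a
columns = 5          # "Number of columns" in the Grid style settings

# Items fill horizontally, so each full row holds `columns` thumbnails.
rows_per_page = math.ceil(items_per_page / columns)
print(rows_per_page)
```

Changing either number (e.g. 12 items across 5 columns) simply rounds the row count up, leaving a partially filled last row.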

Step 4: Test your setup

Now scroll to the bottom of your view configuration page and check the Auto preview checkbox. If your gallery is being properly displayed in the region below, then you did it! Just save the view and it will be created as a block (remember, that's the option we chose in step 3a, #4). Now you just navigate to the blocks page (Admin Bar > Structure > Blocks) and drop the block into a region to see how the view looks on your site.

Step 5: What's next?

You can do a lot to customize your gallery. Here are some options:

  • Change the pager settings for your view (determines the default number of thumbnails in your gallery)
  • Style the view output for your view using CSS (I like CSS Injector, since it's quick and easy for beginners)
  • Add a Page display to the view, so it exists on its own page (learn more about displays in the Views documentation)
  • Change Image styles to change thumbnail size or image scaling/cropping in the thumbnail
  • Upload more images
  • Allow rating of images (using modules like Voting API, or Fivestar)

Other Resources

  • Views Gallery module by KarenS -- A Drupal 6 module that uses this methodology with hardcoded content types.

Note: I wrote this tutorial as part of a Drupalcon Denver documentation sprint and originally saved it in the Views issue queue (which I'm pretty certain was the wrong place to put it). Anyway, I'm reposting it here so I can clean it up, include images, add links, and make it more findable. I certainly could have used this when I was figuring it out. Cheers!

Drupal 7 Juicebox HTML5 Responsive Image Galleries Module

Juicebox HTML5 Responsive Image Galleries

Last updated January 10, 2014. Created by rjacobs on May 21, 2013.
Edited by jlindsey15.

The Juicebox module helps integrate the Juicebox HTML5 responsive gallery library with Drupal. Juicebox is in many ways the successor to Simpleviewer and offers a powerful, responsive image gallery front-end based on HTML5. See the project page for a detailed feature overview.


  1. Install and enable the required Libraries API module (version 2.0 or above).
  2. Download the third-party Juicebox library and extract it to a temporary location. Both the Lite (free) and Pro versions should work fine with this module; which one you choose depends on how much formatting flexibility you require.
  3. Copy the appropriate core Juicebox library files that you just extracted to Drupal's library directory. Typically, this means you will create a new directory called /sites/all/libraries/juicebox and then copy the full contents of the Juicebox "jbcore" directory to this library directory. You will end up with a structure like /sites/all/libraries/juicebox/juicebox.js and /sites/all/libraries/juicebox/classic/themes.css, etc.
  4. Install and enable this Juicebox module.
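
Steps 2 and 3 can be sketched as shell commands. Everything below is illustrative: the first block only simulates an extracted "jbcore" directory with stand-in files (it does not download the real library), and the copy commands assume they are run from the Drupal root. The point is the final layout the Libraries API expects:

```shell
# Simulate the extracted Juicebox download (stand-in files, not the real library).
mkdir -p /tmp/juicebox-extracted/jbcore/classic
touch /tmp/juicebox-extracted/jbcore/juicebox.js
touch /tmp/juicebox-extracted/jbcore/classic/themes.css

# Copy the contents of jbcore into Drupal's libraries directory.
mkdir -p sites/all/libraries/juicebox
cp -R /tmp/juicebox-extracted/jbcore/. sites/all/libraries/juicebox/

# Expected result: sites/all/libraries/juicebox/juicebox.js exists.
ls sites/all/libraries/juicebox
```

Note the trailing `/.` on the `cp` source: it copies the contents of jbcore, not the jbcore directory itself, which is what produces the required juicebox.js path.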

If for any reason you enable the Juicebox module (step 4) before installing the Juicebox library (step 3), or if you make changes to the Juicebox library itself, please be sure to clear your Drupal cache at /admin/config/development/performance. This will ensure that the correct Juicebox library information is detected by the Libraries API.

Advanced installation note: If you plan to use Juicebox galleries with anything other than Drupal nodes, users or views (e.g. special/custom entity types) you may also need to install the Entity API module. Please see these notes for more about this (this is less common and most users can ignore this step).

Usage and Configuration

This module integrates with Drupal on many levels but conceptually it operates just like any other display formatter. It's designed to let you easily turn groups of Drupal-managed image data into Juicebox galleries without making too many assumptions about how your site is structured or what media management strategy you use.

Basic Field-Based Galleries (the Field Formatter)

Users who simply want to add galleries to individual nodes/entities, and manage them individually, can use the Juicebox field formatter. With this method most any multiple-value image or file field can quickly be displayed as a Juicebox gallery.

If you are familiar with other Drupal field formatters this method should be fairly straightforward. When working with the "display" options for your entity (such as the "Manage Display" tab for a node content type) you can simply select the "Juicebox Gallery" format option for any image or file field, and then tweak a number of formatter-specific options to your liking. The Juicebox field formatter (7.x-2.x) is also compatible with Media and File Entity, so file fields constructed through Media widgets (e.g. using reusable images from a global media library) can also become Juicebox galleries, and can even leverage file field data for titles and captions.

Browse additional notes and step-by-step directions related to the Juicebox field formatter.

Views and More Advanced Media Management (the Views Style Plugin)

If you need to group image data from multiple nodes/entities/files into galleries, and leverage the flexibility of views to organize everything, you can use the Juicebox views style plugin. This method allows Juicebox to be adapted to more advanced media management setups where image data is stored in dedicated entities/content types and where more complex organizational tools may be needed (taxonomy and contextual filters, etc).

With this formatter any views that lists files, or content containing image/file fields can become Juicebox galleries. These views may be based on your own design and information-architecture, or be provided by other gallery-like modules such as Node Gallery.

Browse additional notes and step-by-step directions related to the Juicebox views style plugin.

Managing Image Styles

The Juicebox module integrates with Drupal's core image styles so you have the ability to automatically scale your images to appropriate dimensions or add effects. Any core image styles that you create at /admin/config/media/image-styles will be available when configuring which image sources to use for your gallery images and thumbnails. For more information about working with core image styles see these notes in the Drupal handbook.

No matter how big your images are, Juicebox will always display them relatively responsively, but some upper limits apply. This is simply because even large desktop screens may not be able to display the full resolution of your originals. Also keep in mind that even if you use a scaled version of each image for display, the full-resolution (unscaled) version can still be made available to users by enabling the "Open Image Button" (in the "Lite Config" options for the gallery). In short, it's not worth going overboard with your image resolution.

It is also possible to implement adaptive/fluid image concepts with Juicebox. This means that each user's maximum device dimensions are detected and device-appropriate scaled versions of each image are delivered to their browser. This can dramatically reduce bandwidth and load times on small devices. In this regard the Adaptive Image Styles (AIS) module can be used with Juicebox to deliver this functionality. Setting this up is as simple as installing the AIS module and then setting Juicebox to use the "adaptive" image style for your main gallery images. See this post for more info.

Multilingual Considerations

Because the Juicebox module leverages native Drupal elements (fields and file metadata) for image titles and captions, no special techniques should be required for content translation in multilingual sites. As long as your title and caption sources are translated with existing Drupal tools, like the Content Translation or Entity Translation modules, the text within your galleries can be language aware.

It is also possible for the Juicebox interface (tooltips for icons/buttons, etc.) to be translated and toggled automatically with the rest of your Drupal interface. To enable interface translations browse to the global Juicebox settings at admin/config/media/juicebox and check the "Translate the Juicebox javascript interface" option. You will then be able to see and customize the "Base string for interface translation", which represents the base English text that Drupal will attempt to translate (based on the user's active language) before passing it to the Juicebox library. Most users will not need to change this default base string. Once this interface translation option is enabled it's still up to you to actually enter a translation for the base string in your site, typically with the Locale module's "translate interface" tool at admin/config/regional/translate. See here for more details.

Additional Notes from the Issue Queues

  • Compatibility with Theme Developer module. There is a known incompatibility between this module and the Theme Developer module. Nothing serious will happen if the two are enabled side-by-side, but your galleries may not display correctly until Theme Developer is disabled. More details are available here.
  • Launch straight to full-screen. If you are comfortable with custom Drupal theming techniques, you can set up your galleries to launch directly into a "full screen" (full window) mode. See these notes for details.
  • Performance. If your gallery pages receive very high levels of traffic that cannot be serviced by standard Drupal caching, you may be interested in these performance notes.

Drupal 7 Multilingual Entity Translation

Localized and Multi-Lingual Content in Drupal 7

The new (and old) translation systems, and how they work

The way that Drupal manages translations has been evolving over several versions of Drupal. It has always been somewhat daunting to figure out how to set up a multilingual site in Drupal, and it requires a combination of core and contributed modules to make it work well. In Drupal 7 we have some great new features, but we also ended up with two different systems of managing content translation, so there are also lots of new questions and options. If you're new to Drupal's multilingual system, or new to Drupal 7, you'll have lots of questions about how to get this working well.

To make this whole process easier we'll cover the following in this article:

  • Understand the difference between interface and content translation
  • Discuss the two alternative systems for content translation in Drupal 7 and how they differ
  • Walk through the installation and setup of a D7 multilingual site
  • Provide an extensive list of modules, articles, and resources that may be helpful

Interface Translation

There are two basic components to the translation system, the translation of the interface and the translation of content.

Translating the interface has to do with the translation of miscellaneous text strings used all over the site (like the label used on Submit buttons). These are elements that are the same on all sites, no matter what content a particular site contains. Because these strings are standardized, Drupal is able to provide everyone with translated values for all these elements in various languages. To take advantage of this you enable the core Locale module, which allows you to grab translated text from the Drupal Localizer site and import it into your site. Voilà: you will have French or German versions of the interface text on your site, without any need to translate them yourself.

For the developers in the room, the heart of the interface system is the strings that are passed through the t() function. You can see how many of these strings have been translated into various languages in the graph on the home page of the Drupal Localizer site. That site serves as a central location used by translators from around the world to maintain interface translations for all Drupal projects, both core and contributed modules.
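For illustration, here is a minimal sketch of how Drupal 7 module code marks strings for interface translation (the variable names are hypothetical; t() and its placeholder syntax are the real core API):

```php
<?php
// Any literal string passed through t() becomes translatable via the
// Locale module and the community translations maintained centrally.
$label = t('Submit');

// Placeholders keep dynamic values out of the translatable string, so a
// single translation covers every possible value of @name.
$greeting = t('Welcome back, @name!', array('@name' => $account->name));
```

Because only literal strings can be collected for translators, dynamic text should always go through placeholders rather than string concatenation.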

Content Translation

Once you have the interface translated, the real challenge is to find a good way to translate your content. No one else has the same content you have, so there is no way to download its translations automatically. To do this you need a place to store the translated versions of the content (in as many languages as you need) and a system that will choose which version to display where.

This problem was solved in earlier versions of Drupal by creating a complete copy of each node that needs translation. So the French node would have all the French values of the content and the English node would have all the English values. Then they are organized together in translation sets, so Drupal knows which ones are the 'same' content.

This is the system used in older versions of Drupal, and it is still available in Drupal 7, if you enable the core Content Translation module.

Entity Translation

In Drupal 7, a new model for content translation was created. In this system each piece of content consists of a single node, but each field on the node can have multiple copies, in different languages, all attached to the same entity.

The API to get this system working went into core, but there was no time (and not enough agreement) to get a UI into core. So the Entity Translation module was created to provide a way for site administrators and translators to use the new field translation system. Note that the node title is only translatable if you use the Title module.
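To make the per-field model concrete, here is a rough sketch (field names are hypothetical, but the language-keyed structure is how Drupal 7 stores field data) of how one node can carry several language variants:

```php
<?php
// With field translation, a single node holds every language variant.
// Field values are keyed by language code, then delta, then column:
$node->body['en'][0]['value'] = 'Hello world';
$node->body['fr'][0]['value'] = 'Bonjour le monde';

// Fields that are not translatable use the special "no language" key,
// the LANGUAGE_NONE constant ('und'):
$node->field_price[LANGUAGE_NONE][0]['value'] = 42;
```

Contrast this with the translation-set model, where the French values would live on an entirely separate node.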

The most confusing thing about the D7 translation system is that both of these systems are available. You can even use one system for one content type and the other for another content type. If you only want the D6-style translation set system, you would enable the core Content Translation module and not use the contributed Entity Translation module at all. If you only want the D7 per-field translation system, you would not enable the core Content Translation module and instead enable the contributed Entity Translation module.

The Pros and Cons

There are pros and cons of each system. The original system of creating translation sets was the easiest way to solve the problem before the new field system went into core, but it leaves you with multiple nodes that are really the 'same'. That can be a problem for SEO and is also a problem when combined with some contributed modules. For instance, if you have a node that describes an event and you want people to sign up for the event, you don't want them signing up separately for the English and French nodes, you want all the sign ups on the same node.

But if you have a single node with translated fields, it becomes harder to do things like have different menu entries or workflows for each language. The Content Translation Models Debate is a great screencast by Gabor Hojtsy that illustrates some of the pros and cons of each alternative. Note that many of the problems for each model are solved, at least to some extent, with contributed modules, which is one reason why it takes so many modules to make the system work.

If you don't know which model to use, or if you think they seem to be equally suited to meet your needs, you probably should opt for the newer model that allows you to have multiple field translations on a single node. There are still a few problems with this model for some use cases, but there are people working to solve those problems and make this system flexible enough to work everywhere.

Creating a Site Using Entity Translation

To illustrate how this works in Drupal 7, let's walk through how to set up a site that will translate content into several languages, using the per-field translation model. We'll create a clean D7 install, set up as an English speaking site, and then add multilingual capability to it.

The minimum list of modules we will need includes:

To add some usability to the site, we'll also use:

Add Localization

Start by enabling Locale and Localization Update. That will give us two new options on the Administration Configuration screen.

First, go into the Language section and add whatever languages are desired, by clicking on the Add language link and selecting the language.

Then select the Detection and Selection tab. This is where you determine how Drupal will decide which language to display to the user. For most sites the URL option will make the most sense. With this option Drupal will send Spanish users to a path prefixed with 'es' and French users to a path prefixed with 'fr'. You can select more than one option and re-arrange them to set the order in which they will be tested.

The other option on the Administration Configuration screen, Translate Interface, contains tools to import, update, and manage the interface strings mentioned above.

From this screen you can tell which languages are installed, see the strings that are being translated and what the translations look like, and even update them. You won't need any of these right now, but this is where to go if you need to make changes or update these values in the future. The Update tab was added by the Localization Update module, and it makes it easy to see whether you have the latest translations for all of your enabled modules.

Next go to the Content Type edit page and poke around in the vertical tabs at the bottom of the page; you will see there is now a new option to translate that content type.

If you select that option, it will add a new field on the nodes that you create, where you can select a language for that node.

Finally, go to the block administration page and add the Language Switcher block to the page. This will allow the user to choose the interface language they want to see.

At this point we have a site that can create nodes in various languages, but we can't translate one node into multiple languages. We have a site that has localization, but not translation.

Add Translation

To add translation to the site we need to turn on additional modules. Since we've decided to use the Entity Translation model (a single node with fields in multiple languages), we will not enable the core Content Translation module and instead enable the contributed Entity Translation module. Since we need the Title translated as well as other fields, we also need to enable the Title module.

Finally, we will want to translate not only the content of the fields, but also things like the field labels and descriptions. And if we have fields with lists of allowed values we want to be able to create translated versions of those lists. To translate those we need to enable the contributed Internationalization (i18n) package. That is actually a whole suite of modules that fill in some of the gaps left in the translation system. We don't yet need all of the modules in the package, but we want the core module and the Field Translation module (which translates field properties).

We will need to enable the Entity API and Variable modules as well, because some of the modules above depend on them.

After enabling these modules, we re-visit the content type administration page. When we edit each content type we now see a new option to enable Entity Translation for this content type.

On the Manage Fields screen for each content type we now see an option to replace the regular title with a field.

After making this change, the title will be an editable field, just like all the other fields. The title will also be displayed in the content like any other field, so you may want to go to the Manage Fields screen and hide it, since you will see the title at the top of the page already.

For each field that needs to be translated, click on the Field settings link and check the box to translate this field.

If there is already content in this field you will see a message noting that, but you can still change the option to translate the field. This should just serve as a reminder that the content in those fields is not yet translated.

The final step is to create or edit a page that has a language assigned. We now see a new tab on the page in addition to the View and Edit tabs: a Translate tab. This tab takes us to a page that shows each language we have enabled on the site, where we can add a translated version of the content for that page. Note that this tab will only appear on nodes that have a language selected.

There are new options on the node edit page. In addition to the box to select a language, there is a way to flag translations as outdated.

Adding More Features

That is enough to get started with the Drupal 7 multilingual system. There are lots of other modules and features you can add to make the system better. The Internationalization (i18n) module includes additional modules to translate taxonomy terms or menu items or forums. And there are several other modules that provide additional functionality that could be useful. A number of them are listed below. Some are not totally ported to Drupal 7 yet and your mileage may vary, but you can explore all these options, depending on your needs.


Core Modules:

Locale: translate the user interface into different languages and create different date formats for each language.

Content Translation: translate content, where each language is in a separate entity and they are connected in translation sets.

Contributed Modules:

Internationalization, a suite of modules that supplement the multi-lingual capabilities of core, adding capabilities like translating taxonomy, providing language selector blocks, translating variables, and much more:

Entity Translation, allows you to translate individual fields into different languages on the same entity:

Title, makes the node title into a translatable field that can be used with Entity Translation:

Internationalization Views, adds more translation capabilities to Views:

Language Icons, adds little flag icons for each language to the language links:

Language Switcher Dropdown, a nicer language switcher drop down:

Administration Modules:

None of these are required for a multi-lingual site to function correctly, but may be useful to make administration easier.

Localization Update, adds the same update capability for translations as Drupal has for modules, to make it easier to keep translations up to date using the Localize Drupal site:

Localization Client, helps you keep your translations up to date more easily with an on-page UI where you can fix the translations as you navigate the site:

Translation Overview, administration table that shows what content has been translated into what languages:

Translation Table, a table to make it easier to change the text for menus, variables, taxonomy, field names, etc:

Admin Language, let the administrator see all administration pages in a chosen language, no matter what language the site uses:

Install Profiles/Features:

Localized Drupal, an installation profile that automatically sets up a Drupal site configured to use the multi-lingual system:


Extensive explanation of how the D7 translation system works by Gabor Hojtsy:

Overview of the contributed D7 i18n (Translation) module by Jose Reyero:

The Content Translation Models Debate, a screencast that illustrates the evolution of how multi-lingual content was handled in core in Drupal 5, 6, and 7, and the issues that still need to be resolved in Drupal 8:

Translation Handbook page:

An Overview of Field Translation by Randy Fay:

The Localize site where all Drupal translations live:

Field Language API:

Drupal 7 User Directory

  1. Make sure you have installed and enabled Views (and Chaos Tools).
  2. Configure your user form with any additional fields you want by going to admin/config/people/accounts/fields. In my example here, I used Job Title, Location and a field for Full Name, so you can use that instead of the Username in the view, since usernames are often abbreviations. Thinking ahead, consider if you would like any of these fields to be exposed filters on your staff view. Because I want people to be able to find staff based on their job title or location, I made those fields select lists instead of plain text fields.
  3. Add your users at admin/people/create. Once created, edit the user to add a photo and fill in the additional information. (You can expose the additional fields to the User Create form too. Depending on your site, you may not want all those fields on the initial registration page so as to not overwhelm new users, if they are the ones filling out the form.)
  4. Now let's create the View: admin/structure/views/add. Show Users, create a Page, and give it a path so you can find it.
  5. Set the Format to Grid and Show Fields. Select the fields you would like to expose. I've selected User: Picture, User: Full Name, User: Job Title, User: Location, in that order.
  6. Choose your Sort Criteria. If you want to sort by last name, you may need to create another field for just the last name, or redo your fields to include first and last names as separate fields.
  7. In the Advanced area, I have enabled "Exposed form in a block", so one can place the filters in the sidebar, as opposed to at the top of the view.
  8. Save your View, and visit the blocks page to position your Exposed filters block in the desired region. Also, edit your block so it only appears on that View (and perhaps on user pages).

[Screenshot: Drupal 7 staff list]

And that's basically it; it's ready for theming. The one tricky part is that Drupal 7 still uses an odd image field for the user picture that is not configurable with Image Styles (the core Drupal 7 replacement for ImageCache), although it seems to default to the thumbnail Image Style, both on the User page and in Views. In that case, we'll need to theme the pictures manually with theme_image_style() in your user-picture.tpl.php file. If you want the staff view to use a different size, find the Field User: Picture tpl information under Theme: Information in the views configuration and theme it from there.
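As a sketch of that manual theming step (the style name and markup here are illustrative choices, not the only way to do it), a user-picture.tpl.php override might render the picture through an image style roughly like this:

```php
<?php
// user-picture.tpl.php (sketch): render the user picture through a
// chosen image style instead of the hard-coded default.
if (!empty($account->picture)) {
  print theme('image_style', array(
    // Any style defined at admin/config/media/image-styles will work here.
    'style_name' => 'thumbnail',
    'path'       => $account->picture->uri,
    'alt'        => t("@user's picture", array('@user' => format_username($account))),
  ));
}
```

theme('image_style', ...) takes care of generating and caching the derivative image the first time it is requested.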

Drupal 7 Using the Juicebox views style plugin

Using the Juicebox views style plugin

Last updated January 10, 2014. Created by rjacobs on December 30, 2013.

Users who want to create Juicebox galleries from multiple nodes/entities/files, and leverage the flexibility of views to organize everything, can use the Juicebox views style plugin.

The notes on this page assume you are using a 7.x-2.x version of the Juicebox module. Not all options/features outlined here will be available in 7.x-1.x, though the same general concepts still apply.

Step-by-Step Setup Example

Drupal Views is an extremely flexible tool, and the Juicebox module integrates with it in a fine-grained way to accommodate a broad set of use cases. Almost any view that lists files, or content containing image/file fields, can become a Juicebox gallery.

The steps below outline a basic case where a content type is set up with an image field and then multiple nodes of that type are gathered for display as a Juicebox gallery. This could be considered a starting point for more complex gallery setups that incorporate other views concepts (e.g., filters, relationships, etc.) or a reference when tweaking existing views to work with the Juicebox formatter.

  1. Ensure the Juicebox module is properly installed along with the views module.
  2. Create a content type to hold the images that will be part of a gallery. Add fields for the image itself (each node will typically only hold one image for this example), a text/html caption field, a title field, etc. You may also choose to add additional fields that views can use for "organizational" purposes (such as a taxonomy reference field to group your images in albums/galleries, etc.).
  3. Add some images to your site, along with the relevant title/caption field data, etc., using the content type you just created.
  4. Create a new view that lists nodes of the content type that you created. Add a standard page display to this view that uses the display format "Juicebox Gallery".
  5. Before configuring any Juicebox-specific settings you must first add fields to your view (from the content type fields that you created earlier) for all of the data that Juicebox will use. At a minimum you must add a view field for the actual image source that Juicebox will display. You can optionally add fields for the title text that will accompany each image, the caption text that will accompany each image, a separate thumbnail image source field (if it should be different from the main image source), etc.
  6. Set up whatever content filters, sorting options, etc. that you like (e.g., using a taxonomy-based contextual filter to set up distinct albums within this single view definition).
  7. Under "format" click "settings" to access the Juicebox-specific display options. Here you can:
    • Specify which of your view fields should be used for each of the Juicebox gallery data elements (image, thumbnail, title, caption). You should have already added a field for each of these to your view, so here you simply need to map each element to the appropriate view field.
    • Specify which image styles to use when displaying images and thumbnails. Note that you first may need to create a new style at /admin/config/media/image-styles if none of the available options are suitable. See the Managing Image Styles notes for more information about selecting image styles.
    • Customize a variety of Juicebox configuration options for this gallery.
  8. Save your view. Note that the preview function in the view admin may not display anything, in which case you will need to navigate to the actual view path to see and test the results.

More Advanced Views Integration

The steps above cover the main integration concepts between the Juicebox module and views, but the possibilities are by no means limited to that example.

  • File fields, file views and Media module. Views that list content containing file fields, or that list files directly, can also be used to make galleries. This is especially handy for people using the Media and File Entity modules as file and file field data (custom fields added directly to files) can be used to construct galleries.
  • Views filters, relationships, etc. Core views concepts (contextual filters, exposed filters, relationships) can all be used as expected allowing multiple galleries and complex gallery information architecture to be built from just one or two views.
  • Text formatting. All formatting settings configured on views fields that are used for captions and titles will be respected. So field rewrites and other views tricks can be used there.
  • Multiple images per view row. If your gallery's image source is based on an image or file field that's multivalued, by default Juicebox will only display the first item from that field. However, this behavior can be altered based on these notes.
  • Views recipes. Some additional notes on views usage can be referenced in the issue queues, such as these notes about building multiple galleries using a taxonomy-based contextual filter and a gallery index.

Usage with existing views (e.g., Node Gallery)

If you have already set up views that list image or file data in some way, or you use a module that implements "bundled" views to manage media, converting those views to use the Juicebox formatter should be fairly straightforward. You just need to ensure that your view display is structured to show "fields" (as opposed to fully-rendered content) and that it includes fields to directly represent each gallery data element (image, title, caption, etc.). After this, it should be possible to enable the Juicebox formatter and configure it as outlined above.

One popular case of adding Juicebox to an existing view is for compatibility with the Node Gallery module. Node Gallery provides a simplified "out-of-the-box" solution for managing galleries without the need to configure any content types and views from scratch. Because it internally leverages views to manage most of its gallery output, adding Juicebox is just a matter of overwriting (or retrofitting) the correct bundled view to use the Juicebox formatter. The quick steps needed to accomplish this are outlined here.

Drupal 7's new multilingual systems Basics Part 1

This is part one in a series of posts on the new multilingual features in Drupal 7 core and contrib. I was sadly not as involved in the core multilingual work as I wanted to be, so I need a refresher myself on some of the finer details of what is going on. Hence this journey through the new features, which I thought would be useful for you, dear readers, too. Thankfully many bright folks picked up the work and drove a good bunch of new multilingual functionality into the new version. Let's begin!

New regional settings

Even before you enable any multilingual features, Drupal 7 comes with a new Regional settings configuration pane under Administration » Configuration » Regional and language. The first-day-of-week setting moved here from Date and time settings. You now have the ability to set a default country; this will not do much in itself, but contributed modules can build on it. The time zone is now also set here, and there were heroic, concentrated efforts to make it more intelligent. Previously, if you set a time zone, you needed to revisit it twice a year when daylight saving time began or ended. Instead of merely storing a time offset, Drupal 7 now stores the name of the time zone, which is then used with PHP 5.2+ DateTime objects to calculate the right offset at any given time. It all works like magic now.
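A small plain-PHP illustration of why storing the zone name beats storing an offset (the dates are arbitrary, chosen to fall on either side of a DST transition):

```php
<?php
// Storing only an offset breaks twice a year; storing the zone name does not.
$zone = new DateTimeZone('America/New_York');

// Same zone name, two dates on either side of the DST switch.
$winter = new DateTime('2024-01-15 12:00:00', $zone);
$summer = new DateTime('2024-07-15 12:00:00', $zone);

// PHP derives the correct offset automatically from the zone database:
echo $winter->getOffset(), "\n"; // -18000 (EST, UTC-5)
echo $summer->getOffset(), "\n"; // -14400 (EDT, UTC-4)
```

Drupal 7 simply stores 'America/New_York' and lets DateTime do this work, so the site never shows stale offsets.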

Language support

To add language support to Drupal, you still need to enable the Locale module, like in previous versions. While Drupal core itself has language handling baked in at multiple levels, the locale module provides a user interface on top of basic language configuration with the assumption that if you need multilingual support, you probably need translated interfaces as well.

Once you enable the Locale module, the items Languages and Translate interface show up under Administration » Configuration » Regional and language. The role of the former is to let you configure your languages and which one Drupal should pick in given scenarios. The role of the latter is to let you manage your interface translations (much more on that in part 2).

Let's look at language configuration first. At first glance, it looks like nothing changed from Drupal 6. You can still set up any number of languages to be supported by your site, with one being the default. Out of the box, English is this default language. Languages can have native names, language codes, path prefixes and custom language domains set up. There is a list of built-in languages to choose from, similar to Drupal 6, which lets you add new languages quickly.

The big change hides under the Detection and selection tab. While Drupal 6 had a fixed selection of pre-baked combinations of options for deciding which language to use, Drupal 7 completely modularized this and offers finer-grained control over which detection methods to use in which configurations, and you can even set their order! No more debating in issue queues over how inapplicable certain methods are to your use case; you can build your own puzzle here. The interface language can be determined based on URL information (path, domain), session data, a user setting, or browser preference, or it can fall back on the default. Contributed modules can extend this list and even implement other ways to detect language. One can easily imagine a module providing a specific language based on your source IP (like Google does).
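To illustrate that extensibility, a contributed module could register its own detection method through hook_language_negotiation_info(). The sketch below uses hypothetical module and function names (mymodule, mymodule_geoip_lookup); consult locale.api.php for the authoritative set of keys:

```php
<?php
/**
 * Implements hook_language_negotiation_info() (sketch).
 *
 * Registers a custom language provider that Drupal 7 lists alongside
 * URL, session, user and browser detection on the Detection and
 * selection tab, where administrators can enable and reorder it.
 */
function mymodule_language_negotiation_info() {
  return array(
    'mymodule-geoip' => array(
      'callbacks' => array('language' => 'mymodule_language_from_ip'),
      'weight' => -5,
      'name' => t('IP address'),
      'description' => t('Guess the language from the visitor IP address.'),
    ),
  );
}

/**
 * Language provider callback: returns a language code, or FALSE to let
 * the next enabled detection method decide.
 */
function mymodule_language_from_ip($languages) {
  // Hypothetical GeoIP lookup; fall through when the guess is not an
  // enabled site language.
  $langcode = mymodule_geoip_lookup(ip_address());
  return isset($languages[$langcode]) ? $langcode : FALSE;
}
```

Returning FALSE rather than a default is what makes the providers composable: each one only answers when it is confident.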

Interface translation changes

The interface translation features, again, did not change much at first glance. However, the translation user interface got some usability attention, so it now looks much more like other filter & action screens (like users, content, logs and so on). You get several filters at the top of the page, and the results show underneath. Drupal 6 had an inconsistent user interface approach here that is finally done away with.

The translation table highlights another subtle-looking but possibly huge new feature of Drupal 7's localization system: string context support for translation. What does that mean? Well, think of the word view. What does it mean? Is it a noun? Is it a verb? Even if it is a noun, does it have one fixed meaning? Consider these uses of view:

  1. View this piece of content.
  2. Set up a new view with views.
  3. You have such a nice view from this window!
  4. I just set up a database view to speed this query up.

While Drupal could easily end up needing to translate view as a standalone word applicable to any of these situations, there was previously no way to tell Drupal which situation should apply. Now in Drupal 7, a standard way to provide this distinction, called contexts, was added.

Drupal core only comes with two contexts by default: Font weight, which is applied to Strong, and Long month name, which is applied to May. Again, you can imagine the words "strong" and "May" have varying meanings depending on context, and telling translators that they are used as a font weight or a long month name makes it possible to provide the right translation.
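In code, a context is passed as an option to t(). For example, using the two core contexts just mentioned (the variable names are illustrative):

```php
<?php
// Without a context, translators see only the bare word "May" and
// cannot tell whether it is the month or the auxiliary verb.
$ambiguous = t('May');

// With contexts, translators know this "May" is a long month name and
// this "Strong" is a font weight, so each string can be translated
// independently and correctly.
$month  = t('May', array(), array('context' => 'Long month name'));
$weight = t('Strong', array(), array('context' => 'Font weight'));
```

The same source string with different contexts is stored and translated as separate entries in the locale system.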

Unfortunately we have not defined guidelines for contexts yet, given we are still figuring out how best to use them. Several contributed modules have started to use them in inconsistent ways, and I'd expect the names of the contexts to keep evolving and to be settled through discussion between translators and module maintainers. As the examples hopefully show, contexts are not meant to provide per-module translatability for strings, but rather per-meaning translatability.

Translators can check the existing contexts used by all projects on the Drupal Localizer site (some of which look pretty broken). Issues should be tagged with string context when discussing string context problems, for easy identification.

Not all APIs support contexts consistently. Menu items and strings used in JavaScript do not support contexts in Drupal 7. Also, we did not make progress on supporting plural versions of strings in watchdog entries (let alone context support for them).

Best practices of using community interface translation

Drupal 7 still builds on the well-proven Gettext .po format (which also includes support for the above-mentioned contexts), but how you obtain translations has changed in the past year or so. That is not just a Drupal 7 change; it also applies to older Drupal versions, but you are most probably facing this change now. Part 2 of my series will continue by covering this topic.

Drupal Books Module

Book module: Creating structured documents

Last updated January 30, 2012. Created by Dries on April 8, 2002.
Edited by jhodgdon, ihsanfaisal, arianek, peterx.

A book is a set of pages tied together in a hierarchical sequence, perhaps with chapters, sections, subsections, and so on. You can use books for manuals, site resource guides, Frequently Asked Questions (FAQs), or whatever you'd like.

Book module is not enabled by default. It must be enabled through Administer >> Site building >> Modules (Drupal 5 and 6) or Administration >> Modules (Drupal 7).

Users who have permission can create a book and write, review, modify, or rearrange the pages. Many users can work together on a book -- you can allow or disallow collaboration, to whatever extent you want.


Creating, modifying, and administering books

On the books administration page administer >> content >> books (Drupal 5 and 6) or Content >> Find content >> Books (Drupal 7), users with proper permission can view a list of all published books on your site. For each book there's a link to an outline, from which you can edit or delete pages or sections, change their titles, or change their weight (thus putting them in a different order). In some versions of Drupal, you can also check for orphan pages (pages that have become disconnected from the rest of the book); in other versions of Drupal, pages cannot be orphaned.

When a user creates new content of type Book page, they can add their page at the level of their choice in a book, or start a new book if they have permission. This is called defining the "parent" for a book page, and is in the "Book outline" section of the edit screen.

You also can change the position of a page in the book hierarchy later from the page edit screen, by changing the "parent" to which it belongs. Any "child" pages of the page you are editing will automatically be moved too, so if the page you are editing is a section header, this allows you to move an entire section.

On the permissions page administer >> user management >> permissions (Drupal 6) or Dashboard >> People >> Permissions (Drupal 7), you can assign users with various roles the permission to create book pages, to create new books, and to edit their own book pages or the pages of others.

You can also give permission to outline posts in books or add content to books (depending on the version of Drupal you are using). Users with this permission can take any other type of content on your site and add it to a book. When viewing content they'll see an outline tab, and by clicking it they'll come to an interface that lets them move the content into a book.

Book navigation and menus

When a visitor to your site is viewing a book page, they will automatically see links at the bottom for navigating to the previous page and the next, and a link labeled up that leads to the level above in the structure. There will also be a link to a printer-friendly version of the page at the bottom, for users with permission to view printer-friendly versions of pages.

The Book module automatically generates a contents page for each book. However, if the books on your site are complex, you may find that you need additional navigational aids beyond the table of contents and the previous/next/up links for users to understand where they are in your book. One navigational aid you can use is the book navigation block, which you can enable on the blocks page administer >> site building >> blocks (Drupal 5 and 6) or Dashboard >> Structure >> Blocks (Drupal 7). Enabling this block will turn on a menu that shows where the user is in your book; the menu is only visible when viewing the book.

Another navigational aid you can add to your site is a books link in any of your menus, which will take users to a list of your books. The books menu item is automatically part of the Navigation menu, and you can enable it from menus page administer >> site building >> menus (Drupal 5 and 6) or Dashboard >> Structure >> Menus (Drupal 7). You can also add this link to any menu you want (click "add menu item," and enter "book" in the "path" field.)

Note that the "books" link takes users to your books. The "book navigation" block helps users move around inside your books.


Here are the common operations with books. You can:

  • create a new book: create a new book page create content >> book page (Drupal 5 and 6) or content >> add content >> book page (Drupal 7) with a title for the new book, then select <create new book> in the Book Outline section, then publish the page.
  • create new book pages: create content >> book page (Drupal 5 and 6) or content >> add content >> book page (Drupal 7).
  • administer individual books (choose a published book from list): administer >> content >> books (Drupal 5 and 6) or Content >> Books (Drupal 7).
  • set workflow and other global book settings at administer >> content >> content types >> book page (Drupal 5 and 6) or Dashboard >> Structure >> Content types >> Book page >> Edit (Drupal 7).
  • enable the book navigation block: administer >> site building >> blocks (Drupal 5 and 6) or Dashboard >> Structure >> Blocks (Drupal 7).
  • control who can create, edit, and maintain book pages at administer >> access control or administer >> user management >> permissions (Drupal 5 and 6) or Dashboard >> People >> Permissions (Drupal 7).

Confusing behaviour

If you create a new book and choose not to publish it, the book will not appear in the list of books, and you will not have the option to add child pages to your first book page. Effectively, you have to publish the first page before you can start adding child pages. If you want to create a book structure without making it public until it has been edited and vetted, use a role-based access module (or similar) so that you can publish the book and add child pages while hiding it from the public until it is ready for publication.

Child pages and child page menu entries are listed alphabetically. If you reorder the child pages using weights, the menu entries do not change to match. You have to reorder the menu entries separately.


Drupal Event Calendar - How to use it

Kings Grant website now sports a new event calendar
It can be accessed through the members tab.
Click on the Event Link and it will take you to content that describes the Event.

Calendar Events are entered like any other content
Add Content > Event

1. Enter the Event Title. This will become the linked title that will appear on the Event Calendar
2. In the body, describe the event, using pictures, text, and links. This will become the description that folks will see when they click on the Event Calendar link.
3. Enter the event start date and end date. If you do not want to show the end date, uncheck the box.
4. Click Submit or Save, and your Event will show on the Calendar.

If you wish to change or cancel an event on the calendar, simply edit the event content. Click Save to save your changes, or click Delete.

Drupal Menus

By default, content on a Drupal site is not automatically placed in any particular structure. When creating a node, you don't choose where on the site it should be. You create it, and then other parts of Drupal can make it appear as a subpage to a particular menu item, in a list in a particular section, or as a part of another structure.

The most direct way of bringing structure to your Drupal site is to use menus. These are links collected in a tree structure.

    Note: The initial version of this section of the Community Documentation came from the book Drupal 7: The Essentials, courtesy of NodeOne and Johan Falk

A standard installation of Drupal has four initial menus: main menu, management, navigation and user menu. More menus can be added via Drupal's interface, and you can also choose where and how they should be displayed.

Displaying Menus

There are, in principle, two ways of displaying menus:

    Each menu on the site has its own block, which can be placed in a region just like any block.
    The theme on the site can (but does not always) have two places where menus are displayed in a special format – main links and secondary links. In a standard Drupal installation, the main links are displayed as large white tabs against the blue header, while the secondary links are displayed as discrete links in the upper-right corner of the site.

Which menus should be used for main links and secondary links can be changed at the toolbar's Structure link, in the Menu and Settings tabs.

    TIP: The main links and secondary links display only one level of menu links; submenu items are not shown. It is possible to use the secondary links to display subitems of the primary menu by configuring them to fetch links from the same menu. The Menu block module provides further possibilities for displaying selected levels and parts of a menu.

Creating and editing menu links

As with many other administration tasks, there is an overview for managing menus. It can be found by going to the toolbar and selecting first Structure and then Menus, and it displays all the menus available on your site. (See figure 4.1) Each menu presents three options.

  • List links: This gives you a list of all items in this menu, and is usually what you want to do when managing a menu.
  • Edit menu: This allows you to change the name and description of the menu itself (not the links it contains). You may also delete any menus you have created yourself.
  • Add link: This leads to a page for adding another link in the menu. See details below.

Figure 4.1: The menu overview can be found in Structure, Menus.

At the top of the list of menus is an Add menu link used for adding further menus. The only difference between menus that you create yourself and those provided by modules (or the standard installation) is that your custom menus can be deleted.

Creating menu links for nodes

Last updated June 13, 2012. Created by Itangalo on May 13, 2012.
Edited by kwseldman.

A quick and easy alternative when creating menu links is to use the menu options available on node edit pages, under Menu settings. If the Provide a menu link option is checked, a number of new options become available (see figure 4.3). All settings are similar to the menu item configurations described in the previous section.

Figure 4.3: You can create menu links to nodes right from the node's edit form.

By default, articles and basic pages may only be placed in the Main menu. There are settings on each node type determining which menus should be available in the node edit form, as well as default settings for the parent item.

Drupal Online Business Directory by Katy

Online Business Directory:

The Fort Langley BIA (Business Improvement Association) requested that we redesign their existing business directory website built with HTML. They wanted to be able to edit the site content and add new listings themselves.  They have 100 businesses in their BIA who needed User Accounts and Business Listings. Some non-BIA members also need Business Listings.

I used the Acquia Drupal bundle to set up a basic Drupal website. It consists of Drupal core as well as a group of contributed modules, including Views and CCK.

A. Modules used
1. CCK
2. Content Profile
3. User Import
4. Node Import
5. User Protect

1. I used CCK to create custom content type called Business Listing

2. I used Content Profile module to create custom user registration form by assigning Business Listing content type to be the Content Profile

3. Client gave me MS Excel spreadsheet with contact info for all BIA members. I saved it as .CSV format.
- Then I used User Import module to do a bulk import and created many User accounts with attached Business Listing at one time.
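The column headings below are purely hypothetical; during the import, User Import asks you to map whatever headings your spreadsheet uses onto the user and Business Listing fields. A row of such a .CSV might look like:

```
first_name,last_name,email,phone,business_name
Jane,Doe,jane@example.com,604-555-0101,Fort Langley Books
```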

4. Used Node Import module to import non-BIA members info into Business Listing. These don’t have User accounts.

5. Used Views to create Business Directory page with Taxonomy term filter exposed to add search functionality.

6. User Protect module – protects your Super User account from being deleted by mistake.  I needed to update all the User accounts. So I deleted all the current users and ended up deleting my Super User account by mistake. So the User Protect module will prevent this from happening again.

B. How to:
1. Set up Business Listing content type with these fields:
First name, last name, address, phone, fax, email, website, hours of operation, description, image.

2. Go to settings for Business Listing
Enable: Use this content type as a content profile for users
Click on Content Profile tab and enable these two items:
- Use on Registration
- Use on administrative user creation form

3. Set up Taxonomy with the 14 business categories as terms.
- Add vocabulary called Business Categories
- Add terms eg. Accommodations, Attractions: General, Attractions: Historic

4. Set up BIA member role and permissions

5. Set up the User Import module to import the list of BIA member contact info in .CSV format and create multiple user accounts at once. You have the option of turning off 'Notify user about account' and of updating existing user accounts.
- set First + Last name with space in between as User Name
- map the fields in your spreadsheet to the fields of Business Listing
- you can assign BIA member role to all Users
- you can customize the Welcome email message here or Notify Users of new account later on after training

6. If you look at a Business Listing node, you will see the user name as the node title. We want the company name to be shown as the title, so I needed to edit the Title field and replace the user name with the company name. I had to go in and edit each Business Listing anyway.

When you view the User Account, you see the User Name as title
Now, when you view the Biz Listing, then you see the Company Name as the title.

7. Edit each Business Listing to add image, hours, description, replace title with company name and choose the business category.

8. Not all Business Listings are BIA members so import them using Node Import module. These businesses will not have User account. With the Node Import module you can map the Company name to be the node Title.

9. Set up View to display Content type Business listing.
- Expose the filter for taxonomy term to use for searching the directory
- Company name – link to node



Thanks a lot for sharing valuable information

Thanks a lot for sharing. You have done a brilliant job. Your article is truly relevant to my study at this moment, and I am really happy I discovered your website. However, I would like to see more details about this topic.

Only one Problem with Content Profiles

There is just one problem with content profiles, and I am finding it out late in the game: you cannot restrict access to content profiles. I am trying to set up a paid directory, and while I can build a system that restricts a role's ability to create a node of a given content type (which integrates nicely with Ubercart's role-based node access), you cannot restrict access to the content profiles themselves.

Content profiles are also difficult to theme; you do not get the standard array structure you would see when you do a print_r on a more traditional node.

Maybe I missed something big, but I'm not sure I did.

Not to take away from your write-up, which is excellent. Didn't know about User Protect module -- good one!

Thanks much.

Try using Node Access module

You can try using Node Access module to restrict access to content by node type. Let me know how it works out.

Thanks for the info on how

Thanks for the info on how you did this. I too am wanting to create a business directory that includes both members and non-member users. One question, though: if you use Content Profile for the Business Listing type, wouldn't that need to be associated with a registered user? You mention that you created user accounts for members with linked Business Listings and also created Business Listings for non-members, but since Content Profile is used for user account profiles, wouldn't you have needed to create accounts for non-members as well? I would like to create all the listings myself as an administrator, but then my account would be associated with each Business Listing, and I believe Content Profile only allows one profile per user (unless I'm wrong). So how did you create Business Listings for non-members? Thanks for any help!

Thanks for posting your question on my blog

Thanks for posting your question on my blog. I used the Node Import module for Drupal 6 to import non-member info into the Business Listing content type. These are non-members who don't have user accounts. The Node Import module does a bulk import of business listings from an MS Excel spreadsheet or CSV file.

To add individual non-member Business Listings, go to Content Management > Create Content > Business Listing and create a new node for the Business Listing content type.


Drupal Organic Groups Basics

The organic groups module allows you to create a working group where a number of people with shared interests can create content which is non-public, visible to group members only.

To be able to create and moderate a group, you need to get 'group moderator' permissions. To get this you need to ask the Administrator.

To create a new group you first need to create a new content type.

In the navigation bar under your name in the left sidebar (which I call the admin menu), go to 'Content types', then select 'Add content type'. Choose a name and a type (the same as the name, but machine-readable), and tick 'published' but not 'promoted to front page'.

Under 'organic groups usage', tick 'Group Node'; this is mandatory. Set everything else to your liking and save the content type.

Then go to 'create content'. You will find your new content type there; select it, and everything else is pretty self-explanatory. By creating the first content node of this type, you set up the group. Select 'private' if you like, and 'moderated', but do not offer it for 'selection on registration'.

You can decide if your group should be listed on the 'groups' homepage. I would suggest to say 'yes' to that.

Now, to create content for your group, click on groups, and then on your group. Once you are 'inside' your group, there is a new 'groups' menu on the last sidebar; you will probably have to scroll down a little bit to get there. Select one of the content types offered there.

You can then decide with each article if you would like to publish it only in your group or visible to all.

Here you go. Enjoy group life;)

A note of caution: as a group moderator you are given quite a few rights. Please use those rights with caution.


Drupal Tuning For Performance

Get the best out of your Drupal site with these speed tips

(Tim Millwood .net) Drupal is not known as the most performant application, and neither is PHP, the language it is written in, but there are lots of things you can do to increase the performance of your Drupal site. This article touches on many of those methods, covering key modules, how to configure them, and how to set up other applications to aid your Drupal site.

1. Planning

One of the main things that will affect the performance of your Drupal site is bloat. There are many modules available, and you may want to install them all, but don't. Plan ahead: look at what each module does, try to see how well they work and whether they offer what you want. Most modules offer an 'uninstall' function, but not all do, so when testing a module, try it in a dev environment so you can delete everything and start again if needed. This prevents unneeded data accumulating in the database.

2. Updates

Updates to Drupal core and contributed modules are released very often, and these updates can include performance improvements, so keeping up to date is vital. Clearly, updates call for some level of caution: testing them before pushing them live is a must, as you never know which features could have changed, been removed or broken.

Check the updates page in your Drupal site to stay up to date

3. Pressflow

Drupal 6 had many issues, some of which prevented the use of third-party performance tools such as the Varnish reverse proxy cache. These issues were all resolved in a distribution of Drupal called "Pressflow", so I would recommend that anyone running Drupal 6 look at upgrading to Pressflow. All of the changes made to Drupal 6 for Pressflow have since been worked on by the Drupal community as a whole and added to Drupal 7. If you are building a new site with Drupal, Drupal 7 should be your version of choice.

If using Drupal 6, make sure it's the Pressflow distribution.

4. APC

APC (Alternative PHP Cache) is a PHP opcode cache. It is a very quick win when working with PHP and can offer a great performance boost when using Drupal. It is very much a "set it and forget it" type of application which can just be installed, enabled and left to do its thing. Many Drupal-specific hosting companies already have APC set up and running, so you may even be using it without noticing.
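A typical php.ini fragment for APC might look like this (the shared-memory size is a guess; tune it until the cache stops filling up):

```
extension=apc.so    ; load the opcode cache
apc.enabled=1
apc.shm_size=64M    ; room for Drupal core plus a typical set of contrib modules
apc.stat=1          ; set to 0 in production to skip file-mtime checks (then clear on deploy)
```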

5. Memcache

Drupal’s support for Memcache is really good across Drupal 6 and Drupal 7, so even if you have an older site this can still offer you a boost. Drupal has a fantastic hookable caching system, where any module can write to a standard cache table, or create its own cache table and use a specific API to write to it. These cache tables can save large, complex PHP tasks or MySQL queries, but they can also create more slow queries for reading and writing the cache. Memcache relieves that problem by storing all of these cache tables in memory. For many sites this reduces load on the server and increases the performance of the site.
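With the Memcache contrib module, the wiring happens in settings.php. A minimal sketch, assuming the module lives in sites/all/modules/memcache:

```php
<?php
// Route Drupal's cache tables through memcached.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_default_class'] = 'MemCacheDrupal';
// Keep the form cache in the database: losing a form token
// mid-request breaks form submissions.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
// One or more memcached instances to spread the cache across.
$conf['memcache_servers'] = array('127.0.0.1:11211' => 'default');
?>
```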

6. Varnish

When you have a lot of anonymous users, a reverse proxy cache can save you a lot of server load. Varnish is one of the more popular solutions within the Drupal world. Varnish sits in front of your web server application, for example Apache, nginx or lighttpd, and can run on the same server or a remote server. It is often run on a load balancer in front of multiple web servers. Varnish will cache pages for anonymous users, for as long as the "max_age" header is set. Varnish can be quite complex to set up, but there are many Drupal-focused tutorials. It's advisable to configure it to bypass the cache only for users with a cookie starting with "SESS", as these are given to authenticated Drupal users; however, any module that sets "$_SESSION" in its code will also set one of these cookies, which will cause Varnish to be bypassed and extra load to be added to the web server. Also note that when a cached page is served from Varnish, no PHP code is executed within Drupal, so things such as mobile detection or GeoIP detection will not function.
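The SESS-cookie rule described above comes down to a few lines of VCL. A sketch in Varnish 3 syntax, meant as a starting point rather than a complete configuration:

```
sub vcl_recv {
  # Static assets never need a session; strip cookies so they cache.
  if (req.url ~ "\.(png|gif|jpg|css|js|ico)$") {
    unset req.http.Cookie;
  }
  # Only authenticated Drupal users carry a SESS* cookie.
  # Everyone else loses their cookies and gets the cached copy.
  if (req.http.Cookie !~ "SESS") {
    unset req.http.Cookie;
  }
}
```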

7. Boost

When you are on an environment that won’t allow you to use Varnish, such as shared hosting, Boost will work as a great alternative. Boost is a Drupal module that caches all of the pages, for anonymous users, to flat files. When the page is then requested it is loaded a lot quicker, because it is coming straight from the disk and no PHP or MySQL processing is needed. Boost does not work as well on distributed or cloud environments which use network file systems, as the reads and writes on these can be a lot slower and cause issues.

8. CDN

Drupal is not the only web application that can benefit from a CDN. A CDN is used to distribute static assets such as images, documents, CSS and JavaScript across many locations, so can be useful if you target an international audience. If you only target a more local audience then serving your static assets from Varnish may actually work out faster.


9. Views

The Views module is one of the best modules ever written for Drupal, but it can often end up generating very slow database queries. When optimising Views, many of the same rules apply as when optimising database queries. In the Views interface, when you "Preview" a view it shows you the query it generates, and from there it may be clearer what is going on under the hood. First, I would advise using InnoDB in MySQL instead of MyISAM; this offers a great performance boost. I would then look at ways to avoid "distinct" and "count" in your queries. When sorting by date, make sure the granularity is set to seconds. Try a few different settings within Views, and a few different versions of the queries, to see which ones load faster; you may be able to get much better performance by only slightly compromising on functionality.
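Converting a table from MyISAM to InnoDB is one statement per table (the schema and table names below are examples; run the conversion during a maintenance window):

```
-- Find the tables that are still MyISAM:
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'drupal' AND engine = 'MyISAM';

-- Convert one of them:
ALTER TABLE node ENGINE = InnoDB;
```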

Views lite pager

The Views lite pager module is one of the biggest performance boosts I have seen for the Views module. The standard pagers within Views add first, previous, page-number, next and last links. To generate these, Views needs another database query using the MySQL count function to find out how many pages there are. With InnoDB the count function is so slow that it can easily take down large sites. The Views lite pager removes the need for a count query by adding just a previous and a next link to the pager. This loses a small amount of functionality, but the performance boost is incredible.

Views cache

Views has its own caching system which lets you set how long each view should be cached for both anonymous and authenticated users. This is stored in the Views cache tables, or in memory if you are using Memcache. Normally, when a page containing a view (or several views) is requested, a database query is run to load the data for that view. If a thousand people request that page over a few minutes, those views will execute thousands of database queries, which can cause quite a performance hit. If you set the Views cache to 5 minutes on each view, that database query will only run once every 5 minutes, no matter how many times the page is requested. The downside is that new content won't be displayed for up to 5 minutes, but that's a small price to pay for the performance. For views whose content doesn't change very often, you can set the cache time much longer.

This module is infinitely configurable, and there are a few ways to make it perform really well

10. Block cache

Drupal’s block cache can offer a great performance boost for anonymous and authenticated users, especially when used with Memcache. When generating a block in Views, you can select whether the block should use the block cache. Make sure you enable this, and select a setting that is sensible for the type of data you're displaying. For example, if you are listing all posts by the current logged-in user, you would want to cache per user, so people don't end up seeing each other's content. If you are listing articles related to the current page, you would need to cache per page, to prevent unrelated articles being displayed.

11. File system optimisations

When you start running a very large site, you may find a cloud hosting solution suits you better because it offers great flexibility, instant scaling and so on, but these systems don't work the same way as a standard hosting environment. In a cloud environment you would usually need to store your files (images, documents, CSS, JavaScript, etc.) on network-attached storage so that they stay in sync and are available to all web servers in your environment. These file systems (like many other file systems) don't handle large numbers of files in a single directory well. By default, Drupal puts uploaded files straight into sites/default/files; when you have a few million images in that one directory, accessing a single one of them could lock up the file system and cause a performance issue, or worse, crash the system. If you put your files into date-ordered folders, for example sites/default/files/YYYY/MM/DD, you can avoid this issue. A great module to help with this is File(field) Paths, which allows you to set "tokens" for your file paths and split files into different folders.

12. Fast 404

All sites get "404 page not found" errors, although they are more common when you are launching a new site and paths to pages and images have changed. When loading a 404 page, Drupal has to do a full "bootstrap": load all modules, load settings, and so on. If a few images are missing from a page, this can end up consuming hundreds of megabytes of server memory that doesn't need to be used. The Fast 404 module serves a very simple 404 page that uses very little memory. Drupal 7 has a little of this functionality in core, but the Fast 404 module offers a lot more. Missing images and 404 errors are not something to ignore; I have seen this issue cause sites to fail.
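Drupal 7's built-in variant is switched on from settings.php; the snippet below is based on the commented-out block that ships in Drupal 7's default.settings.php:

```php
<?php
// Serve a lightweight 404 for missing static assets instead of
// doing a full Drupal bootstrap.
$conf['404_fast_paths_exclude'] = '/\/(?:styles)\//';
$conf['404_fast_paths'] = '/\.(?:txt|png|gif|jpe?g|css|js|ico|swf|flv|cgi|bat|pl|dll|exe|asp)$/i';
$conf['404_fast_html'] = '<html><head><title>404 Not Found</title></head><body><h1>Not Found</h1></body></html>';
?>
```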

13. Bad modules

Drupal core ships with some great modules but it also ships with some nasty ones. Here are 3 of the worst:

Database logging (dblog)

The database logging module writes all log messages to the database. When you have many errors, debugging messages, or modules that write other log messages, this can mean many database inserts per page load, putting extra strain on your database server and causing performance issues. The recommendation here is to disable the database logging module and use the syslog module instead. Syslog also ships with Drupal core, but writes to the server's log file, offering similar functionality at a fraction of the resources.


Statistics

The Statistics module is used to count how many times content has been viewed, as well as collecting other data about users' activity on the site, much like Google Analytics. This can cause multiple database writes per page load for both anonymous and authenticated users, which adds unwanted load on the database. Also, if you are using reverse proxy caching such as Varnish, statistics will not return accurate data. As the maintainer of the Statistics module in Drupal core, I am working to resolve these issues and hope to have them solved in Drupal 8, and possibly backported to Drupal 7; until then I would suggest using Google Analytics. The Google Analytics Reports module uses the API to fetch information from Google and make use of it on your site.

PHP filter

The PHP filter module allows PHP code to be added to content (nodes) and blocks. This code is stored in the database, so Drupal has to load it from the database before executing it. As you can imagine, this is slower than having the code in a file as part of a Drupal module. What makes it worse is that code executed through the PHP filter is never cached. So please, put all of your code into custom modules.
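Moving a PHP-filter snippet into a module is less work than it sounds. As a sketch, a hypothetical mysite.module exposing one block in Drupal 7 looks roughly like this (a matching mysite.info file is also needed):

```php
<?php
/**
 * Implements hook_block_info().
 */
function mysite_block_info() {
  $blocks['greeting'] = array(
    'info' => t('Site greeting'),
    // Cache one copy of the block for the whole site.
    'cache' => DRUPAL_CACHE_GLOBAL,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function mysite_block_view($delta = '') {
  $block = array();
  if ($delta == 'greeting') {
    $block['subject'] = t('Welcome');
    $block['content'] = t('This markup used to live in a PHP filter block.');
  }
  return $block;
}
```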

14. Performance monitoring

Different sites have different problem areas, and there are a few ways to monitor performance during development, load testing and live usage. The Devel module offers features such as listing the database queries run and the time they took, as well as reporting the memory used to load the page. These will tell you which areas of the site should be optimised, and can easily be run during development.

New Relic, on the other hand, is a third-party service that works well with Drupal. It runs on the server and logs the speed of queries and functions on the site. When running a load test, you can watch New Relic for vital information that will help improve the site. New Relic also works well on live sites, so even after launch you can continue to monitor for issues and get tips on improving the site.

15. Frontend

As with all sites, the user's perception of performance is often governed by the frontend, not the backend. If the web page has a lot of assets, or those assets load slowly, the server will be perfectly happy and keep running forever, but the user will think the site is incredibly slow. Drupal can aggregate CSS and JavaScript files, turning the tens of CSS and JavaScript files your modules and themes load into just a few.

Drupal is also not well known for the quality of its HTML output; it is better known for its flexibility. Altering the HTML to contain only the structure your site needs will reduce the page size and improve load time.

Make sure you are using Imagecache in Drupal 6 or image styles in Drupal 7 to reduce the size of user-uploaded images.


Drupal Webform

The Webforms feature allows you to collect information from anonymous site visitors. Uses for this feature include surveys, multi-page questionnaires, polls, event registration forms, and lead generation pages.

Webform features a basic analysis of the results collected. If this is insufficient, you can download the data from your site for further analysis in any spreadsheet.

For information about viewing the results of your webforms, see Webform results.

Webform is typically used for data collection that is one-way communication: many users submitting values to a very small set of administrators. Webform is not a front end for letting users create custom nodes. Webform submissions are not nodes. This means that you can't use Views with submissions, set up custom access permissions for them, or do much of anything with them outside of what Webform provides.

Webform Components

Webform components are basically the equivalent of CCK fields. You can add any number of fields to a node that an end-user can fill out. All components are included with the Webform module. These include:

  • date
  • email
  • fieldset
  • file
  • grid
  • hidden
  • markup
  • pagebreak
  • select
  • textarea
  • textfield
  • time

To learn how to create checkboxes, radio buttons/groups, and menus/lists, read the Webform Field Types portion of this doc.

Creating webforms

To create a new webform:

  1. In the shortcut bar, select Add content, and then click the Webform link.

    Create Webform page

  2. Enter a Title for the webform, such as Survey.
  3. Construct the webform by dragging fields from the Fields list on the left into the editing area on the right.

    Adding fields to the webform

    Adding the first field into the editing area automatically adds a Submit button to the webform.

    You can rearrange the fields at any time during the webform creation process.

  4. In the editing area, click on each field to view the field's editable attributes on the left. For more information about the different fields and their attributes, see the Customizing fields section of this page.

    Editing field settings

  5. Use the vertical tabs at the bottom of the page to set additional attributes for your webform, including menu links, a custom URL, and other special settings. For more information, see the Customizing the webform section of this page.
  6. Click Publish.

Drupal Gardens creates the webform based on your settings. To view the created webform, in the admin menu, select Find content.


Customizing fields

Each field that you select for your webform has additional attributes that you can modify in order to ensure that you're collecting the required information from the person filling out the form.

When you select a displayed field in the webform in order to edit it, the field's attributes appear on the left of the page in the Field settings tab. For each field, the Field settings tab contains an accordion view of that field's Properties, Display, and Validation attributes.

Text field
A single-line text entry field.

Multi-line text field
A multi-line text entry field.

Radio buttons
Obtains a single choice from a list of items.

Check boxes
Obtains one or more choices from a list of items.

Drop-down list
Obtains a single choice from a list of items.

E-mail
Only accepts valid e-mail addresses as input.

File upload
Allow visitors to include files as part of their submission. You can use this to let users include photographs, music, documentation, resumes, and more.

Page break
Insert one or more page breaks to create a multi-page form. This can help you keep question pages short and simple.

Formatted content
Enter explanatory texts, instructions, images and so on. To format and organize the text, select a text format and use the HTML formatting it supports.

Fieldset
Organize fields into groups on the page. Simply drag a fieldset into place, then drag one or more fields into it.

Hidden field
This field and its contents are not visible to your site visitors. You can use it to label results from different forms or different versions of the same form, to add reminders to site admins about processing deadlines, or for anything else you need. If you set a default value, such as a survey version number, it is returned with the rest of the results submitted by your site visitors.

Customizing the webform

The panels at the bottom of the webform page include several customization options for the webform.

The Form settings panel provides customization options specific to webforms, including how they handle user data submissions, access, and other advanced settings.

Form settings panel

For information about the other available panels on the page, see Common content settings.

Submission settings

  • Customize confirmation check box - Configure your website's actions when a user submits a response to this webform.

    Select from the following form submit options in the drop-down menu:

    • Show standard confirmation page - Displays a confirmation page that contains the information you enter in the Page body field.
    • Redirect to a different page - Redirects users to the page in the Path field. You can also display a confirmation message by selecting the Show a confirmation message check box, and then entering information in the Page body field.
    • Stay on the same page - Users stay on the webform page. You can also display a confirmation message by selecting the Show a confirmation message check box, and then entering information in the Page body field.
  • Enable spam protection (Mollom) check box - Use Mollom to protect webform comments from spam.

    For more information about Mollom, see Mollom. This check box is enabled by default.

  • Limit submissions check box - Limit how often visitors can submit the webform to protect against spam.

    When you select this check box, additional settings appear which allow you to set a visitor's number of allowed submissions for a period of time.

  • Send a confirmation e-mail check box - Send a notification email to an administrative account (that you select) for each submitted webform.

    When you select this check box, additional fields appear for the email message:

    • To - The email address to which Drupal Gardens sends notification emails. This field only supports a single email address. To send notification emails to multiple recipients, create a mailing list on your email server that contains all of the required recipients.
    • Subject - The subject of the notification email.
    • Body - The formatted text of the notification email. You can also add tokens to the email to send visitor information and webform results. For more information, see the Using tokens with notification emails section of this page.

Submission access

Use this section to set which user roles can access and submit your form. For more information about user roles, see User roles and permissions.

Submission access section

Advanced settings

  • Create a block - Your forms can appear in blocks as well as on their own pages. Select this option, save your form, then go to the Blocks page to enable and configure its block. For more information, see Blocks.
  • Show complete form in teaser - If your form is displayed on your front page or blog page as a teaser, depending on your settings, it might be cut off after a certain number of lines or characters. Select this option to prevent this from happening.
  • Display a link to previous submissions - This option displays a link to the previous submission by visitors who have already submitted your form.

Using tokens with notification emails

Tokens allow you to configure the notification email for each webform submission to include information such as the user's IP address, the date/time of submission, the user's email, and more. Your notification emails can even include values from the completed form.

To add tokens to a notification email, in the Body field, enter tokens from the Token values section. For example, if you want the notification email to include all of the visitor's webform results, include the %email_values token in the Body field.

Token use in notification email

Note: Several tokens include a key, which allows you to obtain token information from a field on a webform. Each field on a webform has a Machine name, which is displayed under the Label field in Field settings > Properties.

Machine name

For example, if you want to display the formatted field label and value for a specific webform field, use the %email[key] token and replace key with the field's machine name (for example, %email[new_1334599733011]).
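
Conceptually, token substitution is just string replacement performed when the notification email is built. A rough illustration in shell (the machine name is the doc's own example; the submitted value is made up):

```shell
# Substitute a webform token the way the email builder conceptually does.
template='Thanks! You entered: %email[new_1334599733011]'
value='jane@example.com'   # hypothetical submitted value

# Replace the token with the submitted value.
message=$(printf '%s\n' "$template" | sed "s/%email\[new_1334599733011\]/$value/")
echo "$message"
```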


The THEMING.txt file included with the module package has guidelines for theming with instructions on how to customize submitted e-mails, confirmation pages, and the display of the form itself.

Dump Libre Office Install Open Office

Apache OpenOffice 4.0 Released–Here’s How To Install It In Ubuntu

The Apache OpenOffice suite, previously known as, has just been updated to version 4.0 and is available for download from its official download page. This is a major update and brings exciting new features and enhancements along with many bug fixes.

Some of the features released with this version are:  a new sidebar, improvements in Microsoft Office interoperability, support for more languages and a major performance boost. For more about this release and all of the other features included, check out its release notes page.



This brief tutorial is going to show you how to easily install or upgrade to the latest OpenOffice version in Ubuntu 13.04 and previous versions. You can use it in place of LibreOffice if you want. I am not recommending it, but it’s up to you to do what you want with your computer.

I am not going to tell you to pick a side between LibreOffice and OpenOffice. All I am going to show you is how to install AOO in Ubuntu and use it.

To get started, press Ctrl – Alt – T on your keyboard to open the terminal console. When it opens, run the commands below to completely remove LibreOffice from your machine. It’s wise to remove LibreOffice before installing OpenOffice. Don’t worry, I will also show you how to revert the changes you made to your computer after installing OpenOffice.

sudo apt-get remove --purge libreoffice* libexttextcat-data* && sudo apt-get autoremove




Next, change into the /tmp directory to download the OpenOffice file.

cd /tmp




When you’re there, download the latest version (32-bit English) of OpenOffice from the official download page, where you can also select other languages.





A 64-bit English version is available from the same download page.





When the file is downloaded, run the commands below to extract the downloaded file.

tar -xvf Apache_OpenOffice*.tar.gz




Next, run the commands below to begin installing it.

sudo dpkg -i en-US/DEBS/*.deb




Next, run the commands below to install the desktop-integration for .deb Linux distributions.

sudo dpkg -i en-US/DEBS/desktop-integration/*.deb




When you’re done, restart and enjoy!
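
The steps above can be gathered into one script. This is a sketch with a dry-run guard so you can preview each command before letting it touch the system; the package globs and paths are taken from the article, and the download step is left as a comment since the URL depends on the mirror and language you pick:

```shell
#!/bin/sh
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to execute.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run sudo apt-get remove --purge 'libreoffice*' 'libexttextcat-data*'
run sudo apt-get autoremove
cd /tmp || exit 1
# (download the Apache_OpenOffice *.tar.gz for your language here)
run tar -xvf Apache_OpenOffice*.tar.gz
run sudo dpkg -i en-US/DEBS/*.deb
run sudo dpkg -i en-US/DEBS/desktop-integration/*.deb
```

Running it as-is prints the command list; nothing is removed or installed until you explicitly set DRY_RUN=0.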




To revert the change and reinstall LibreOffice, run the commands below to completely remove Apache OpenOffice.

sudo apt-get purge openoffice*.* && sudo apt-get autoremove




Then install LibreOffice by running the commands below.

sudo apt-get install libreoffice libreoffice-gnome





Dump Microsoft Office Get LibreOffice Free

If you take a close look at Microsoft's new Office licensing, it's crystal clear: Microsoft no longer wants you to own your office software. They want you to rent it. So, why not get LibreOffice for free instead?

LibreOffice for free, or MS-Office for an ongoing annual fee or a higher one-time price locked to a single PC. It's your choice.

You don't have to believe me, the open-source, Linux guy. I quote Ed Bott, ZDNet's Microsoft maven, "You can no longer buy Office, Microsoft’s flagship product, on removable media. You can’t even download offline installer files for the three retail editions of Office: Home and Student, Home and Business, and Professional."

But, wait, there's more, much more. "Multi-PC editions are no longer available," and "Your perpetual license is locked to one PC." Your PC goes up in smoke? Too bad, you can't legally or physically reinstall "your" copy of Office on another PC.

Why is Microsoft doing this? Well, as Bott explained in an earlier article, Microsoft is applying the classic 'carrot and stick' approach to force you to rent Microsoft Office instead of buying it. The bottom line is it will cost you more to buy Office and you'll get less for your money than if you subscribe to Office annually. That's great for Microsoft. Lousy for you and your company. 

Call me old-fashioned, but I like "owning" my software. I like picking and choosing where I can install it and how I use it. And, also call me sensible. I can pay $150 a year for Office 365 Small Business Premium forever and a day or I can use LibreOffice for free forever and use it anyway and anywhere I want.

Sure, they're not the same thing. Office 365 Small Business Premium comes with Word, Excel, PowerPoint, OneNote, Outlook, and Access. LibreOffice 4.0 comes with Writer (Word); Calc (Excel); Impress (PowerPoint); and Base (Access). LibreOffice doesn't have equivalents to OneNote or Outlook. On the other hand, you can always use Thunderbird instead of Outlook and LibreOffice includes Draw, a graphics program.

So, unless your business depends on OneNote, which is just fancy note-taking software, I don't see any good reason to make MS-Office a perpetual part of your IT budget. Besides, if note-taking really is a big deal for you, may I suggest Evernote instead?

Document format compatibility between the two office suites remains an issue, but it's much less of one than it used to be. Microsoft has gotten better at working with LibreOffice's native Open Document Format (ODF) and LibreOffice has gotten the hang of working with Microsoft's OpenXML format.

To me, it all comes down to whether you want to be a renter or a "buyer." When the cost of buying is zero, I think anyone who can shake themselves from the delusion that they must use Microsoft Office because they always have will know which is the wisest course.


Enterprise Taking The Open Source Plunge

Taking the Open Source Enterprise Plunge

(Jay Lyman @ LinuxInsider) Devops represents a dramatic change from the old siloed developers and script-heavy system administrators of yesterday. Any tools that can provide some common ground for developers and IT operations professionals can help, and it seems Chef and Puppet often do.

Server provisioning and configuration management and automation are the latest examples of where the tech industry is being driven, largely by open source software. The leading open source server and IT infrastructure automation frameworks, Opscode Chef and Puppet Labs' Puppet, sit on the leading edge of significant trends under way in enterprise IT -- particularly disruption from cloud computing and devops, where application development and IT operations come together for faster, smoother delivery of software and services.

I've discussed the importance of open source software in cloud computing and in trends such as devops and polyglot programming. Consistently across all of these trends and the technologies that go with them, there are prominent roles for Chef and Puppet.

Chef and Puppet are a typical starting point for organizations seeking more modern, more automated systems management, particularly when infrastructure spans traditional data centers and virtual, private, and public cloud resources. Organizations use these open source tools to provision, configure, and manage clusters of servers more quickly and efficiently.



At the Core of Devops

Much of the efficiency and automation they provide lies in Chef and Puppet recipes and cookbooks, which are manifests or blueprints of infrastructure and application configurations that can be reused, as well as tracked and refined. This reduces the time and trouble of provisioning and configuring each server or cluster from scratch.

It's common for large enterprise customers to indicate that Chef and Puppet must be able to integrate or work with other technologies in continuous integration and continuous deployment initiatives, which also represent devops implementations. Thus, we see not only customers, but also providers facing a decision of whether to integrate and support Chef and Puppet, or provide similar server configuration and provisioning capabilities.

Chef and Puppet are important for a few reasons. First, their technologies and communities are a core part of the devops trend that joins application development and IT operations efforts for greater speed and agility, improved efficiency and quality.

In addition, these open source tools can serve as standards in the absence of real standards -- an increasingly significant challenge with the polyglot programming trend that translates to more languages, frameworks, databases, tools, infrastructure and general variety in developing, deploying and managing today's applications.

While they may be the leaders, Chef and Puppet (both written in the Ruby programming language) are not the only open source options in the market. There's CFEngine, a server automation framework that is written in C. SALT, a similar framework written in Python, is another option. Juju, from Ubuntu Linux distributor Canonical, is yet another similar toolset in the market.

Slipping Into the Mainstream

The community growth of tools such as Puppet and Chef -- evidenced in part by efforts such as Amazon's new OpsWorks, as well as the commercial growth of Opscode, Puppet Labs and other vendors in the space -- is indicative of devops extending beyond Web 2.0 and technology firms to more mainstream enterprise verticals such as financial services, telecommunications, retail, pharma and health, and the public sector.

These large enterprise organizations are piloting and expanding devops implementations as they respond to much faster software iteration cycles, demanding consumers, and internal users turning to open source, free, or inexpensive options -- a recipe for so-called "shadow IT operations," not something the ops team wants to hear about.

Further evidence of these tools and practices going mainstream lies in expanded integration and support for Windows management and Microsoft environments, which represent a growing number of customers for CFEngine, Opscode and Puppet Labs.

In true enterprise open source form, these tools and enterprise use of them are forcing a response from large, traditional and mainly proprietary systems management vendors. Some of these players, which include HP, IBM, BMC and CA, are responding with their own integration and support for Chef, Puppet and other tools -- but they are also extending to serve devops customers the way they have always responded to disruption: by acquiring companies with the key technologies.

Chef sponsor Opscode and Puppet sponsor Puppet Labs are indeed among the most interesting potential M&A targets in the tech industry today, but both companies are more focused on growing commercial business and community, which seems to be sustaining their success.

While the industry typically asks which player will win in such a scenario, I would argue -- as I did with Xen and KVM -- that both projects, both companies, both customers and providers, and open source in general all benefit from not just one, but two credible options for enterprise technology and capability.

It's clear there is some opportunity in the management of virtualized infrastructure, since two-thirds of companies in the market are either in planning or in pilot, or do not yet have solid plans for server provisioning and configuration technology, based on 451 Research's customer research via TheInfoPro.

Given what we hear from customers and vendors and some of the alignments taking place, such as VMware's recent investment in Puppet Labs, it seems clear Chef, Puppet and open source software will continue to play a prominent role as more organizations take on managing and automating their virtual and cloud infrastructure.

LinuxInsider columnist Jay Lyman is a senior analyst for 451 Research, covering open source software and focusing primarily on Linux operating systems and vendors, open source software in the enterprise, application development, systems management and cloud computing. Lyman has been a speaker at numerous industry events, including the Open Source Business Conference, OSCON, Linux Plumber's Conference and Open Source World/Linux World, on topics such as Linux and open source in cloud computing, mobile software, and the impact of economic conditions and customer perspectives on open source. Follow his blog here.

Ernie Ball Rockin' without Microsoft

By David Becker – c/net news

Sterling Ball, a jovial, plain-talking businessman, is CEO of Ernie Ball, the world's leading maker of premium guitar strings endorsed by generations of artists ranging from the likes of Eric Clapton to the dudes from Metallica.

But since jettisoning all of its Microsoft products three years ago, Ernie Ball has also gained notoriety as a company that dumped most of its proprietary software--and still lived to tell the tale.

In 2000, the Business Software Alliance conducted a raid and subsequent audit at the San Luis Obispo, Calif.-based company that turned up a few dozen unlicensed copies of programs. Ball settled for $65,000, plus $35,000 in legal fees. But by then, the BSA, a trade group that helps enforce copyrights and licensing provisions for major business software makers, had put the company on the evening news and featured it in regional ads warning other businesses to monitor their software licenses.

Humiliated by the experience, Ball told his IT department he wanted Microsoft products out of his business within six months. "I said, 'I don't care if we have to buy 10,000 abacuses,'" recalled Ball, who recently addressed the LinuxWorld trade show. "We won't do business with someone who treats us poorly."

Ball's IT crew settled on a potpourri of open-source software--Red Hat's version of Linux, the OpenOffice office suite, Mozilla's Web browser--plus a few proprietary applications that couldn't be duplicated by open source. Ball, whose father, Ernie, founded the company, says the transition was a breeze, and since then he's been happy to extol the virtues of open-source software to anyone who asks. He spoke with CNET about his experience.

Q: Can you start by giving us a brief rundown of how you became an open-source advocate?

A: I became an open-source guy because we're a privately owned company, a family business that's been around for 30 years, making products and being a good member of society. We've never been sued, never had any problems paying our bills. And one day I got a call that there were armed marshals at my door talking about software license compliance...I thought I was OK; I buy computers with licensed software. But my lawyer told me it could be pretty bad.

The BSA had a program back then called "Nail Your Boss," where they encouraged disgruntled employees to report on their company...and that's what happened to us. Anyways, they basically shut us down...We were out of compliance I figure by about 8 percent (out of 72 desktops).

How did that happen?

We pass our old computers down. The guys in engineering need a new PC, so they get one and we pass theirs on to somebody doing clerical work. Well, if you don't wipe the hard drive on that PC, that's a violation. Even if they can tell a piece of software isn't being used, it's still a violation if it's on that hard drive. What I really thought is that you ought to treat people the way you want to be treated. I couldn't treat a customer the way Microsoft dealt with me...I went from being a pro-Microsoft guy to instantly being an anti-Microsoft guy.

Did you want to settle?

Never, never. That's the difference between the way an employee and an owner thinks. They attacked my family's name and came into my community and made us look bad. There was never an instance of me wanting to give in. I would have loved to have fought it. But when (the BSA) went to Congress to get their powers, part of what they got is that I automatically have to pay their legal fees from day one. That's why nobody's ever challenged them--they can't afford it. My attorney said it was going to cost our side a quarter million dollars to fight them, and since you're paying their side, too, figure at least half a million. It's not worth it. You pay the fine and get on with your business. What most people do is get terrified and pay their license and continue to pay their licenses. And they do that no matter what the license program turns into.

What happened after the auditors showed up?

It was just negotiation between lawyers back and forth. And while that was going on, that's when I vowed I was never going to use another one of their products. But I've got to tell you, I couldn't have built my business without Microsoft, so I thank them. Now that I'm not so bitter, I'm glad I'm in the position I'm in. They made that possible, and I thank them.

So it was the publicity more than the audit itself that got you riled?

Nobody likes to be made an example of, but especially in the name of commerce. They were using me to sell software, and I just didn't think that was right. Call me first if you think we have a compliance issue. Let's do a voluntary audit and see what's there. They went right for the gut...I think it was because it was a new (geographical) area for them, and we're the No. 1 manufacturer in the county, so why not go after us?

So what did swearing off Microsoft entail?

We looked at all the alternatives. We looked at Apple, but that's owned in part by Microsoft. (Editor's note: Microsoft invested $150 million in Apple in 1997.) We just looked around. We looked at Sun's Sun Ray systems. We looked at a lot of things. And it just came back to Linux, and Red Hat in particular, was a good solution.

So what kind of Linux setup do you have?

You know what, I'm not the IT guy. I make the business decisions. All I know is we're running Red Hat with Open Office and Mozilla and Evolution and the basic stuff.


We were creating the cocktail that people are guzzling down today, but we had to find it and put it together on our own. It's so funny--in three and a half years, we went from being these idiots thinking emotionally to now being smart guys talking tech. I know I saved $80,000 right away by going to open source, and each time something like (Windows) XP comes along, I save even more money because I don't have to buy new equipment to run the software. One of the great things is that we're able to run a poor man's thin client by using old computers we weren't using before because they couldn't handle Windows 2000. They work fine with the software we have now.

How has the transition gone?

It's the funniest thing--we're using it for e-mail client/server, spreadsheets and word processing. It's like working in Windows. One of the analysts said it costs $1,250 per person to change over to open source. It wasn't anywhere near that for us. I'm reluctant to give actual numbers. I can give any number I want to support my position, and so can the other guy. But I'll tell you, I'm not paying any per-seat license. I'm not buying any new computers. When we need something, we have white box systems we put together ourselves. It doesn't need to be much of a system for most of what we do.

But there's a real argument now about total cost of ownership, once you start adding up service, support, etc.

What support? I'm not making calls to Red Hat; I don't need to. I think that's propaganda...What about the cost of dealing with a virus? We don't have 'em. How about when we do have a problem, you don't have to send some guy to a corner of the building to find out what's going on--he never leaves his desk, because everything's server-based. There's no doubt that what I'm doing is cheaper to operate. The analyst guys can say whatever they want.

The other thing is productivity. If you put a bunch of stuff on people's desktops they don't need to do their job, chances are they're going to use it. I don't have that problem. If all you need is word processing, that's all you're going to have on your desktop, a word processor. It's not going to have Paint or PowerPoint. I tell you what, our hits to eBay went down greatly when not everybody had a Web browser. For somebody whose job is filling out forms all day, invoicing and exporting, why do they need a Web browser? The idea that if you have 2,000 terminals they all have to have a Web browser, that's crazy. It just creates distractions.

Have you heard anything from Microsoft since you started speaking out about them?

I got an apology today from a wants-to-be-anonymous Microsoft employee who heard me talk. He asked me if anyone ever apologized, because what happened to me sounded pretty rough to him, and I told him no. He said, "Well, I am. But we're nice guys." I'm sure they are. When a machine gets too big, it doesn't know when it's stepping on ants. But every once in a while, you step on a red ant.

Ernie Ball is pretty much known as a musician's buddy. How does it feel to be a technology guru, as well?


I think it's great for me to be a technology influence. It shows how ridiculous it is that I can get press because I switched to OpenOffice. And the reason why is because the myth has been built so big that you can't survive without Microsoft, so that somebody who does get by without Microsoft is a story.

It's just software. You have to figure out what you need to do within your organization and then get the right stuff for that. And we're not a backwards organization. We're progressive; we've won communications and design awards...The fact that I'm not sending my e-mail through Outlook doesn't hinder us. It's just kind of funny. I'm speaking to a standing-room-only audience at a major technology show because I use a different piece of software--that's hysterical.

You've pretty much gotten by with off-the-shelf software. Was it tough to find everything you needed in the open-source world?

Yeah, there are some things that are tough to find, like payroll software. We found something, and it works well. But the developers need to start writing the real-world applications people need to run a business--art and design tools, that kind of stuff...They're all trying to build servers that already exist and do a whole bunch of stuff that's already out there...I think there's a lot of room to not just create an alternative to Microsoft but really take the next step and do something new.

Any thoughts on SCO's claims on Linux?

I don't know the merits of the lawsuit, but I run their Unix and I'm taking it off that system. I just don't like the way it's being handled. I feel like I'm being threatened again.

They never said anything to me, and if I was smart, I probably wouldn't mention it. But I don't like how they're doing it. What they're doing is casting a shadow over the whole Linux community. Look, when you've got Windows 98 not being supported, NT not being supported, OS/2 not being supported--if you're a decision maker in the IT field, you need to be able to look at Linux as something that's going to continue to be supported. It's a major consideration when you're making those decisions.

What if SCO wins?

There are too many what-ifs. What if they lose? What if IBM buys them? I really don't know, and I'll cross that bridge when I come to it. But I can't believe somebody really wants to claim ownership of Linux...It's not going to make me think twice.

You see, I'm not in this just to get free software. No. 1, I don't think there's any such thing as free software. I think there's a cost in implementing all of it. How much of a cost depends on whom you talk to. Microsoft and some analysts will tell you about all the support calls and service problems. That's hysterical. Have they worked in my office? I can find out how many calls my guys have made to Red Hat, but I'm pretty sure the answer is none or close to it...It just doesn't crash as much as Windows. And I don't have to buy new computers every time they come out with a new release and abandon the old one.

Has Microsoft tried to win you back?

Microsoft is a growing business with $49 billion in the bank. What do they care about me? If they cared about me, they wouldn't have approached me the way they did in the first place...And I'm glad they didn't try to get me back. I thank them for opening my eyes, because I'm definitely money ahead now and I'm definitely just as productive, and I don't have any problems communicating with my customers. So thank you, Microsoft.

Copyright ©1995-2006 CNET Networks, Inc. All rights reserved.

File Naming - Paths - Namespaces

All file systems supported by Windows use the concept of files and directories to access data stored on a disk or device. Windows developers working with the Windows APIs for file and device I/O should understand the various rules, conventions, and limitations of names for files and directories.

Data can be accessed from disks, devices, and network shares using file I/O APIs. Files and directories, along with namespaces, are part of the concept of a path, which is a string representation of where to get the data for a specific operation, regardless of whether it comes from a disk, a device, or a network connection.

Some file systems, such as NTFS, support linked files and directories, which also follow file naming conventions and rules just as a regular file or directory would.

File and Directory Names

All file systems follow the same general naming conventions for an individual file: a base file name and an optional extension, separated by a period. However, each file system, such as NTFS, CDFS, exFAT, UDFS, FAT, and FAT32, can have specific and differing rules about the formation of the individual components in the path to a directory or file. Note that a directory is simply a file with a special attribute designating it as a directory, but otherwise must follow all the same naming rules as a regular file. Because the term directory simply refers to a special type of file as far as the file system is concerned, some reference material will use the general term file to encompass both concepts of directories and data files as such. Because of this, unless otherwise specified, any naming or usage rules or examples for a file should also apply to a directory. The term path refers to one or more directories, backslashes, and possibly a volume name. For more information, see the Paths section.

Character count limitations can also be different and can vary depending on the file system and path name prefix format used. This is further complicated by support for backward compatibility mechanisms. For example, the older MS-DOS FAT file system supports a maximum of 8 characters for the base file name and 3 characters for the extension, for a total of 12 characters including the dot separator. This is commonly known as an 8.3 file name. The Windows FAT and NTFS file systems are not limited to 8.3 file names, because they have long file name support, but they still support the 8.3 version of long file names.

Naming Conventions

The following fundamental rules enable applications to create and process valid names for files and directories, regardless of the file system:

  • Use a period to separate the base file name from the extension in the name of a directory or file.

  • Use a backslash (\) to separate the components of a path. The backslash divides the file name from the path to it, and one directory name from another directory name in a path. You cannot use a backslash in the name for the actual file or directory because it is a reserved character that separates the names into components.

  • Use a backslash as required as part of volume names, for example, the "C:\" in "C:\path\file" or the "\\server\share" in "\\server\share\path\file" for Universal Naming Convention (UNC) names. For more information about UNC names, see the Maximum Path Length Limitation section.

  • Do not assume case sensitivity. For example, consider the names OSCAR, Oscar, and oscar to be the same, even though some file systems (such as a POSIX-compliant file system) may consider them as different. Note that NTFS supports POSIX semantics for case sensitivity but this is not the default behavior. For more information, see CreateFile.

  • Volume designators (drive letters) are similarly case-insensitive. For example, "D:\" and "d:\" refer to the same volume.

  • Use any character in the current code page for a name, including Unicode characters and characters in the extended character set (128–255), except for the following:

    • The following reserved characters:

      • < (less than)
      • > (greater than)
      • : (colon)
      • " (double quote)
      • / (forward slash)
      • \ (backslash)
      • | (vertical bar or pipe)
      • ? (question mark)
      • * (asterisk)
    • Integer value zero, sometimes referred to as the ASCII NUL character.

    • Characters whose integer representations are in the range from 1 through 31, except for alternate data streams where these characters are allowed. For more information about file streams, see File Streams.

    • Any other character that the target file system does not allow.

  • Use a period as a directory component in a path to represent the current directory, for example ".\temp.txt". For more information, see Paths.

  • Use two consecutive periods (..) as a directory component in a path to represent the parent of the current directory, for example "..\temp.txt". For more information, see Paths.

  • Do not use the following reserved names for the name of a file:

    CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed immediately by an extension; for example, NUL.txt is not recommended. For more information, see Namespaces.

  • Do not end a file or directory name with a space or a period. Although the underlying file system may support such names, the Windows shell and user interface does not. However, it is acceptable to specify a period as the first character of a name. For example, ".temp".
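Taken together, the rules above amount to a checklist that can be expressed in a few lines of code. The following Python sketch is an illustration of that checklist, not a Windows API; the function name `is_valid_windows_name` and the exact subset of rules it enforces (reserved characters, control characters, reserved device names, trailing spaces or periods) are assumptions drawn from the list above:

```python
# Reserved characters and device names taken from the conventions above.
RESERVED_CHARS = set('<>:"/\\|?*')
RESERVED_NAMES = {"CON", "PRN", "AUX", "NUL"} \
    | {f"COM{i}" for i in range(1, 10)} \
    | {f"LPT{i}" for i in range(1, 10)}

def is_valid_windows_name(name: str) -> bool:
    """Check a single file or directory name against the rules above."""
    if not name:
        return False
    # Reserved characters and control characters (1-31, plus NUL) are out.
    if any(c in RESERVED_CHARS or ord(c) < 32 for c in name):
        return False
    # Names must not end with a space or a period.
    if name.endswith((' ', '.')):
        return False
    # Reserved device names, bare or with an extension, should be avoided.
    if name.split('.')[0].upper() in RESERVED_NAMES:
        return False
    return True
```

For example, "NUL.txt" fails the device-name rule, while ".temp" passes, since a leading period is acceptable.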

Short vs. Long Names

A long file name is considered to be any file name that exceeds the short MS-DOS (also called 8.3) style naming convention. When you create a long file name, Windows may also create a short 8.3 form of the name, called the 8.3 alias or short name, and store it on disk also. This 8.3 aliasing can be disabled for performance reasons either systemwide or for a specified volume, depending on the particular file system.

Windows Server 2008, Windows Vista, Windows Server 2003 and Windows XP: 8.3 aliasing cannot be disabled for specified volumes until Windows 7 and Windows Server 2008 R2.

On many file systems, a file name will contain a tilde (~) within each component of the name that is too long to comply with 8.3 naming rules.


Not all file systems follow the tilde substitution convention, and systems can be configured to disable 8.3 alias generation even if they normally support it. Therefore, do not make the assumption that the 8.3 alias already exists on-disk.
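To make the tilde convention concrete, here is a purely hypothetical Python helper. It only illustrates the general shape of a first alias; the real generation scheme is more involved (collision handling, character substitution, and hash-based names for later collisions), so this should not be mistaken for the actual algorithm:

```python
def short_alias_sketch(long_name: str, seq: int = 1) -> str:
    """Illustrate the tilde convention for an 8.3 alias.

    The real generation scheme is more involved; this hypothetical
    helper only shows the general shape of a first alias.
    """
    if '.' in long_name:
        base, _, ext = long_name.rpartition('.')
    else:
        base, ext = long_name, ''
    # Keep only characters legal in 8.3 names, uppercased.
    base = ''.join(c for c in base if c.isalnum()).upper()
    ext = ''.join(c for c in ext if c.isalnum()).upper()[:3]
    # First six characters of the base, a tilde, and a sequence number.
    stem = base[:6] + '~' + str(seq)
    return stem + ('.' + ext if ext else '')
```

For instance, "Program Files" comes out as the familiar "PROGRA~1".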


To request 8.3 file names, long file names, or the full path of a file from the system, consider the following options:

  • To get the 8.3 form of a long file name, use the GetShortPathName function.
  • To get the long file name version of a short name, use the GetLongPathName function.
  • To get the full path to a file, use the GetFullPathName function.

On newer file systems, such as NTFS, exFAT, UDFS, and FAT32, Windows stores the long file names on disk in Unicode, which means that the original long file name is always preserved. This is true even if a long file name contains extended characters, regardless of the code page that is active during a disk read or write operation.

Files using long file names can be copied between NTFS file system partitions and Windows FAT file system partitions without losing any file name information. This may not be true for the older MS-DOS FAT and some types of CDFS (CD-ROM) file systems, depending on the actual file name. In this case, the short file name is substituted if possible.


Paths

The path to a specified file consists of one or more components, separated by a special character (a backslash), with each component usually being a directory name or file name, but with some notable exceptions discussed below. It is often critical to the system's interpretation of a path what the beginning, or prefix, of the path looks like. This prefix determines the namespace the path is using, and additionally what special characters are used in which position within the path, including the last character.

If a component of a path is a file name, it must be the last component.

Each component of a path will also be constrained by the maximum length specified for a particular file system. In general, these rules fall into two categories: short and long. Note that directory names are stored by the file system as a special type of file, but naming rules for files also apply to directory names. To summarize, a path is simply the string representation of the hierarchy between all of the directories that exist for a particular file or directory name.

Fully Qualified vs. Relative Paths

For Windows API functions that manipulate files, file names can often be relative to the current directory, while some APIs require a fully qualified path. A file name is relative to the current directory if it does not begin with one of the following:

  • A UNC name of any format, which always starts with two backslash characters ("\\"). For more information, see the next section.
  • A disk designator with a backslash, for example "C:\" or "d:\".
  • A single backslash, for example, "\directory" or "\file.txt". This is also referred to as an absolute path.

If a file name begins with only a disk designator but not the backslash after the colon, it is interpreted as a relative path to the current directory on the drive with the specified letter. Note that the current directory may or may not be the root directory depending on what it was set to during the most recent "change directory" operation on that disk. Examples of this format are as follows:

  • "C:tmp.txt" refers to a file named "tmp.txt" in the current directory on drive C.
  • "C:tempdir\tmp.txt" refers to a file in a subdirectory to the current directory on drive C.

A path is also said to be relative if it contains "double-dots"; that is, two periods together in one component of the path. This special specifier is used to denote the directory above the current directory, otherwise known as the "parent directory". Examples of this format are as follows:

  • "..\tmp.txt" specifies a file named tmp.txt located in the parent of the current directory.
  • "..\..\tmp.txt" specifies a file that is two directories above the current directory.
  • "..\tempdir\tmp.txt" specifies a file named tmp.txt located in a directory named tempdir that is a peer directory to the current directory.

Relative paths can combine both example types, for example "C:..\tmp.txt". This is useful because, although the system keeps track of the current drive along with the current directory of that drive, it also keeps track of the current directories in each of the different drive letters (if your system has more than one), regardless of which drive designator is set as the current drive.
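The distinctions above can be summarized in a small classifier. This is a sketch of the rules as stated, not a Windows function (the name `classify_path` is made up for illustration):

```python
def classify_path(path: str) -> str:
    """Classify a Windows path string per the rules described above."""
    if path.startswith('\\\\'):
        # Two leading backslashes: a UNC name, always fully qualified.
        return 'fully qualified (UNC)'
    if len(path) >= 3 and path[1] == ':' and path[2] == '\\':
        # Drive letter, colon, backslash: a fully qualified local path.
        return 'fully qualified (drive absolute)'
    if path.startswith('\\'):
        # Single leading backslash: absolute on the current drive.
        return 'absolute on the current drive'
    if len(path) >= 2 and path[1] == ':':
        # Drive letter with no backslash, e.g. "C:tmp.txt": relative
        # to that drive's current directory.
        return 'relative to the current directory of the drive'
    # Everything else, including ".." components, is relative.
    return 'relative to the current directory'
```

Note that a path such as "C:..\tmp.txt" still lands in the drive-relative category, matching the combined example above.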

Maximum Path Length Limitation

In the Windows API (with some exceptions discussed in the following paragraphs), the maximum length for a path is MAX_PATH, which is defined as 260 characters. A local path is structured in the following order: drive letter, colon, backslash, name components separated by backslashes, and a terminating null character. For example, the maximum path on drive D is "D:\some 256-character path string<NUL>" where "<NUL>" represents the invisible terminating null character for the current system codepage. (The characters < > are used here for visual clarity and cannot be part of a valid path string.)


File I/O functions in the Windows API convert "/" to "\" as part of converting the name to an NT-style name, except when using the "\\?\" prefix as detailed in the following sections.

The Windows API has many functions that also have Unicode versions to permit an extended-length path for a maximum total path length of 32,767 characters. This type of path is composed of components separated by backslashes, each up to the value returned in the lpMaximumComponentLength parameter of the GetVolumeInformation function (this value is commonly 255 characters). To specify an extended-length path, use the "\\?\" prefix. For example, "\\?\D:\very long path".


The maximum path of 32,767 characters is approximate, because the "\\?\" prefix may be expanded to a longer string by the system at run time, and this expansion applies to the total length.

The "\\?\" prefix can also be used with paths constructed according to the universal naming convention (UNC). To specify such a path using UNC, use the "\\?\UNC\" prefix. For example, "\\?\UNC\server\share", where "server" is the name of the computer and "share" is the name of the shared folder. These prefixes are not used as part of the path itself. They indicate that the path should be passed to the system with minimal modification, which means that you cannot use forward slashes to represent path separators, or a period to represent the current directory, or double dots to represent the parent directory. Because you cannot use the "\\?\" prefix with a relative path, relative paths are always limited to a total of MAX_PATH characters.
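A common chore is turning an ordinary fully qualified path into its extended-length form. The helper below is a sketch under the rules just described; the function name is an assumption, and real code should normalize the path before adding the prefix, since the prefix disables the system's own normalization:

```python
MAX_PATH = 260  # classic Windows API limit, including the terminating null

def to_extended_length(path: str) -> str:
    """Prepend the extended-length prefix described above (a sketch;
    normalize the path first in real code, since the prefix disables
    the system's own normalization)."""
    if path.startswith('\\\\?\\'):
        return path  # already in extended-length form
    if path.startswith('\\\\'):
        # UNC name: \\server\share becomes \\?\UNC\server\share
        return '\\\\?\\UNC\\' + path[2:]
    if len(path) >= 3 and path[1] == ':' and path[2] == '\\':
        return '\\\\?\\' + path  # drive-absolute local path
    # Relative paths cannot take the prefix and stay limited to MAX_PATH.
    return path
```

Passing an already-prefixed or relative path through unchanged keeps the helper safe to apply unconditionally.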

There is no need to perform any Unicode normalization on path and file name strings for use by the Windows file I/O API functions because the file system treats path and file names as an opaque sequence of WCHARs. Any normalization that your application requires should be performed with this in mind, external of any calls to related Windows file I/O API functions.

When using an API to create a directory, the specified path cannot be so long that you cannot append an 8.3 file name (that is, the directory name cannot exceed MAX_PATH minus 12).

The shell and the file system have different requirements. It is possible to create a path with the Windows API that the shell user interface is not able to interpret properly.

Enable Long Paths in Windows 10, Version 1607, and Later

Starting in Windows 10, version 1607, MAX_PATH limitations have been removed from common Win32 file and directory functions. However, you must opt-in to the new behavior.

To enable the new long path behavior, both of the following conditions must be met:

  • The registry key HKLM\SYSTEM\CurrentControlSet\Control\FileSystem LongPathsEnabled (Type: REG_DWORD) must exist and be set to 1. The key's value will be cached by the system (per process) after the first call to an affected Win32 file or directory function (see below for the list of functions). The registry key will not be reloaded during the lifetime of the process. In order for all apps on the system to recognize the value of the key, a reboot might be required because some processes may have started before the key was set.


    This registry key can also be controlled via Group Policy at Computer Configuration > Administrative Templates > System > Filesystem > Enable NTFS long paths.

  • The application manifest must also include the longPathAware element.


    <application xmlns="urn:schemas-microsoft-com:asm.v3">
        <windowsSettings xmlns:ws2="http://schemas.microsoft.com/SMI/2016/WindowsSettings">
            <ws2:longPathAware>true</ws2:longPathAware>
        </windowsSettings>
    </application>

These are the directory management functions that no longer have MAX_PATH restrictions if you opt-in to long path behavior: CreateDirectoryW, CreateDirectoryExW, GetCurrentDirectoryW, RemoveDirectoryW, and SetCurrentDirectoryW.

These are the file management functions that no longer have MAX_PATH restrictions if you opt-in to long path behavior: CopyFileW, CopyFile2, CopyFileExW, CreateFileW, CreateFile2, CreateHardLinkW, CreateSymbolicLinkW, DeleteFileW, FindFirstFileW, FindFirstFileExW, FindNextFileW, GetFileAttributesW, GetFileAttributesExW, SetFileAttributesW, GetFullPathNameW, GetLongPathNameW, MoveFileW, MoveFileExW, MoveFileWithProgressW, ReplaceFileW, SearchPathW, FindFirstFileNameW, FindNextFileNameW, FindFirstStreamW, FindNextStreamW, GetCompressedFileSizeW, GetFinalPathNameByHandleW.


Namespaces

There are two main categories of namespace conventions used in the Windows APIs, commonly referred to as NT namespaces and the Win32 namespaces. The NT namespace was designed to be the lowest level namespace on which other subsystems and namespaces could exist, including the Win32 subsystem and, by extension, the Win32 namespaces. POSIX is another example of a subsystem in Windows that is built on top of the NT namespace. Early versions of Windows also defined several predefined, or reserved, names for certain special devices such as communications (serial and parallel) ports and the default display console as part of what is now called the NT device namespace, and are still supported in current versions of Windows for backward compatibility.

Win32 File Namespaces

The Win32 namespace prefixing and conventions are summarized in this section and the following section, with descriptions of how they are used. Note that these examples are intended for use with the Windows API functions and do not all necessarily work with Windows shell applications such as Windows Explorer. For this reason there is a wider range of possible paths than is usually available from Windows shell applications, and Windows applications that take advantage of this can be developed using these namespace conventions.

For file I/O, the "\\?\" prefix to a path string tells the Windows APIs to disable all string parsing and to send the string that follows it straight to the file system. For example, if the file system supports large paths and file names, you can exceed the MAX_PATH limits that are otherwise enforced by the Windows APIs. For more information about the normal maximum path limitation, see the previous section Maximum Path Length Limitation.

Because it turns off automatic expansion of the path string, the "\\?\" prefix also allows the use of ".." and "." in the path names, which can be useful if you are attempting to perform operations on a file with these otherwise reserved relative path specifiers as part of the fully qualified path.

Many but not all file I/O APIs support "\\?\"; you should look at the reference topic for each API to be sure.

Win32 Device Namespaces

The "\\.\" prefix will access the Win32 device namespace instead of the Win32 file namespace. This is how access to physical disks and volumes is accomplished directly, without going through the file system, if the API supports this type of access. You can access many devices other than disks this way (using the CreateFile and DefineDosDevice functions, for example).

For example, if you want to open the system's serial communications port 1, you can use "COM1" in the call to the CreateFile function. This works because COM1–COM9 are part of the reserved names in the NT namespace, although using the "\\.\" prefix will also work with these device names. By comparison, if you have a 100-port serial expansion board installed and want to open COM56, you cannot open it using "COM56" because there is no predefined NT namespace for COM56. You will need to open it using "\\.\COM56" because "\\.\" goes directly to the device namespace without attempting to locate a predefined alias.

Another example of using the Win32 device namespace is using the CreateFile function with "\\.\PhysicalDiskX" (where X is a valid integer value) or "\\.\CdRomX". This allows you to access those devices directly, bypassing the file system. This works because these device names are created by the system as these devices are enumerated, and some drivers will also create other aliases in the system. For example, the device driver that implements the name "C:\" has its own namespace that also happens to be the file system.

APIs that go through the CreateFile function generally work with the "\\.\" prefix because CreateFile is the function used to open both files and devices, depending on the parameters you use.

If you're working with Windows API functions, you should use the "\\.\" prefix to access devices only and not files.

Most APIs won't support "\\.\"; only those that are designed to work with the device namespace will recognize it. Always check the reference topic for each API to be sure.

NT Namespaces

There are also APIs that allow the use of the NT namespace convention, but the Windows Object Manager makes that unnecessary in most cases. To illustrate, it is useful to browse the Windows namespaces in the system object browser using the Windows Sysinternals WinObj tool. When you run this tool, what you see is the NT namespace beginning at the root, or "\". The subfolder called "Global??" is where the Win32 namespace resides. Named device objects reside in the NT namespace within the "Device" subdirectory. Here you may also find Serial0 and Serial1, the device objects representing the first two COM ports if present on your system. A device object representing a volume would be something like "HarddiskVolume1", although the numeric suffix may vary. The name "DR0" under subdirectory "Harddisk0" is an example of the device object representing a disk, and so on.

To make these device objects accessible by Windows applications, the device drivers create a symbolic link (symlink) in the Win32 namespace, "Global??", to their respective device objects. For example, COM0 and COM1 under the "Global??" subdirectory are simply symlinks to Serial0 and Serial1, "C:" is a symlink to HarddiskVolume1, "Physicaldrive0" is a symlink to DR0, and so on. Without a symlink, a specified device "Xxx" will not be available to any Windows application using Win32 namespace conventions as described previously. However, a handle could be opened to that device using any APIs that support the NT namespace absolute path of the format "\Device\Xxx".

With the addition of multi-user support via Terminal Services and virtual machines, it has further become necessary to virtualize the system-wide root device within the Win32 namespace. This was accomplished by adding the symlink named "GLOBALROOT" to the Win32 namespace, which you can see in the "Global??" subdirectory of the WinObj browser tool previously discussed, and can access via the path "\\?\GLOBALROOT". This prefix ensures that the path following it looks in the true root path of the system object manager and not a session-dependent path.

Five things Linux Must Do To Beat Windows 8

(Steven J. Vaughan-Nichols, ZiffDavis) In 2007, thanks to netbooks and Vista, Linux briefly exploded onto the desktop. Microsoft soon realized it was losing the low-end laptop market, so it brought XP back from the dead and practically gave it away to original equipment manufacturers (OEMs). It worked. Linux's popularity receded. In 2012, Microsoft is once more bringing out a dog of a desktop operating system, Windows 8, so desktop Linux will once more get a chance to shine... if it can.

Linux is more than good enough on the desktop. Just ask Google, which used its own Ubuntu-spin, Goobuntu, not just for its engineering desktops but for everyone's PCs.

While much of the reason Linux hasn't gone anywhere on the desktop is Microsoft's iron grip on OEMs and anti-Linux FUD, Linux hasn't helped itself much either. So what can Linux do to compete with Windows as well as the Mac does?

5) Give independent software vendors (ISV)s more support.

I, and far more important Linux figures than I am, such as Linus Torvalds, think Miguel de Icaza, one of the creators of the GNOME Linux desktop, was often off-base in his article What Killed The Linux Desktop. But de Icaza did make some good points. One of the most important of these was that “no two Linux distributions agreed on which core components the system should use. Either they did not agree, the schedule of the transitions were out of sync or there were competing implementations for the same functionality.”

Sure, fundamental programs work on all versions of Linux, but say you're an ISV, what desktop should you build for? KDE? The slumping GNOME? Ubuntu's Unity? My own favorite Linux Mint Cinnamon?


If I'm an ISV, the last thing I want to do is throw money and time into crafting half-a-dozen versions of my user interface for each significant Linux desktop. On the other hand, some ISVs, such as game maker Valve, have looked at Windows 8, turned their backs on it, and are now moving to Linux. That's great, but Linux needs to do more to encourage ISVs.

De Icaza thinks the only way Linux on the mainstream desktop will ever take off is “to take one distro, one set of components as a baseline, abandon everything else and everyone should just contribute to this single Linux. Whether this is Canonical's Ubuntu, or Red Hat's Fedora or Debian's system or a new joint effort.”

He's right. I think that's been Canonical's plan for Ubuntu all along. Linux pros may not care much for Unity, but even the most un-techie people on the planet can use Ubuntu Linux with Unity. While lots of great distributions, such as Mint, are meant for desktop users, only Ubuntu really targets the mass market. If I were an ISV, Ubuntu would be my Linux of choice. After all, it's already Valve's pick.

4) Slow down the pace of change.

I like playing with the newest toys more than most people. Most hardcore Linux users do. Josephine User doesn't want to deal with a major update of her desktop every six months. That's why the successful Linux vendors—Canonical, Red Hat, and SUSE—release long term support versions of their operating systems.

Three years, not six months, is an update cadence that works for most people. Yes, that may mean your desktop release is running the Linux 3.5 kernel. Do you really think most people care about that? They don't. There's a reason why Windows XP, after 11 years in the top desktop spot, has only now been overtaken by Windows 7. People prefer King Log over King Stork. They may say they want the shiniest gizmos, but at the end of the day they want their desktop to look and work the same as it did the day before.

That's a lesson that both Microsoft, with Windows 8 Metro, and Linux distributions that default to GNOME 3.x should learn.

3) Work even harder to get low-level hardware vendor support.

Sure, you really can run Linux on pretty much any PC today—goodness knows I do—but if you want to make the most of your hardware, the vendors, like NVIDIA, still don't deliver the driver goods.

There's not a lot the Linux distributions can do about this. I mean if Red Hat wants a server equipment OEM to listen, they'll pay attention. Red Hat is a major server player. But, no one in the desktop space has that kind of clout. The only thing Linux can do is to offer to build Linux drivers for the OEMs. And, indeed, under Greg Kroah-Hartman's guidance Linux developers have been building free Linux hardware drivers for years. Even now, though, too many OEMs won't accept this free offer.

2) Pound on PC vendors' doors.

Over the years, Dell, HP, and Lenovo have all fooled around with pre-installed desktop Linux. Even now, if you're Joe Consumer, you can't just go to their Web sites or a store and be sure you can buy a Linux PC or laptop. Outside of the US and Western Europe, it's actually easier to get Linux PCs.

Yes, it's actually easy to install Linux on a PC—I do it at least every other week—but most people won't go to the trouble.

We must have more vendors supporting pre-installed Linux desktops. It's great that we have System76 and ZaReason, but we need the big vendors to fully commit to the Linux desktop as well. I mean it's nice that Dell is well on its way to producing a high-end laptop, the Sputnik, for Linux developers, but it would be better still if you could currently order a run-of-the-mill Dell with Ubuntu as well.

At the same time, Linux computers should cost less than their Windows relations. After all, Linux doesn't cost an OEM anything like as much as Windows does. Nevertheless, the first Linux Ultrabook laptop costs as much as its Windows brother.

What the Linux distributors can do here is simply promote Linux on the desktop more to the OEMs. As far as I can tell, only Canonical, once again, is really making a determined effort to promote the traditional Linux desktop. If you really want to see Fedora, openSUSE, or whatever Linux desktop in the market, their distributors need to get on the stick and start pushing and working with OEMs.

1) Linux distributors need to take the traditional desktop seriously.

You know, I think it's wonderful that Linux, thanks to Android, is ruling smartphones and the new generation of Android tablets, such as the Nexus 7 and the Amazon Kindle Fire HD are finally giving the iPad competition. But, the desktop is not going away anytime soon.

I like my fancy tablets as much as anyone does but when it comes to punching in words or keying in data give me a real computer with a real keyboard any day of the week. That's not going to change.

Only two Linux companies seem to get this. One, of course, is Canonical. The other is Google with its Chrome OS and Chromebooks. Google is trying its best to get you to buy, and now rent, Chromebooks. Google gets it. Google may be the king of the Internet, and Chrome OS may be just the Chrome Web browser on top of a thin layer of Linux, but they know the CPU on the desk with a keyboard in front of it is far from dead.

If we really want to see Linux desktops compete, we have a couple of choices. One, we can start supporting Ubuntu or Chrome OS, since they're the only Linux distributions that seem to take the business of the Linux desktop seriously. If not them, then the Linux community must back another distribution to the hilt.

You see, de Icaza was right on one fundamental point. For the Linux desktop to really take off, we must “take one distro, one set of components as a baseline, abandon everything else and everyone should just contribute to this single Linux.” Then, and only then, will we have a desktop Linux that will be able to really take advantage of the opportunity that Microsoft is handing us with Windows 8.


Fix Ubuntu Without Reinstall

This brief tutorial describes how to easily fix broken Ubuntu OS without losing data and without reinstalling it completely.

Fix Broken Ubuntu OS

First of all, boot from a live CD and back up your data to an external drive. That way, even if this method doesn't work, you still have your data and can reinstall everything.

At the login screen, press CTRL+ALT+F1 to switch to tty1. You can learn more about switching between TTYs here.

Now, run the following commands one by one to fix the broken Ubuntu system. (Only remove the lock files if no other package-management process is currently running.)

$ sudo rm /var/lib/apt/lists/lock        # remove a stale apt list lock
$ sudo rm /var/lib/dpkg/lock             # remove a stale dpkg lock
$ sudo rm /var/lib/dpkg/lock-frontend    # remove a stale dpkg frontend lock
$ sudo dpkg --configure -a               # finish configuring half-installed packages
$ sudo apt-get clean                     # clear out the local package cache
$ sudo apt-get update --fix-missing      # refresh the package lists
$ sudo apt-get install -f                # attempt to fix broken dependencies
$ sudo dpkg --configure -a               # re-run configuration after the fixes
$ sudo apt-get upgrade                   # upgrade installed packages
$ sudo apt dist-upgrade                  # apply upgrades that change dependencies

Finally, reboot the system with the command:

$ sudo shutdown -r now

You should now be able to log in to your Ubuntu system as usual.

After I followed these steps, all of the data on my Ubuntu 18.04 test system was there and everything was just as I had left it. This method may not work for everyone, but this small tip worked for me and saved me the time of a reinstall. If you know any better way, please let me know in the comment section and I will add it to this guide as well.
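One caution about the lock-removal step above: deleting the apt/dpkg lock files while a package manager is still running can corrupt the dpkg database. Before removing them, it's worth checking that nothing actually holds the lock. Here is a minimal sketch using flock from util-linux, demonstrated on a temporary file since opening the real /var/lib/dpkg/lock-frontend requires root:

```shell
# Try to take the lock non-blocking; success means no other process holds it.
# The same check applies to /var/lib/dpkg/lock-frontend when run as root.
lock=$(mktemp)
if flock -n "$lock" true; then
    echo "lock is free"   # safe to remove the lock file
else
    echo "lock is busy"   # wait for the other package manager to finish
fi
rm -f "$lock"
```

If the lock is busy, waiting for the running process to finish is far safer than deleting the file out from under it.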

Free Lunch Free Software

You get what you pay for with software
Even with budget systems available, computers are a sizeable investment. Fortunately, the software you install doesn't have to add to the bill.
Modern computer users are lucky to have a vast and growing library of free, open-source software available, which can save you hundreds or thousands of dollars over commercial programs without sacrificing essential features.

LibreOffice and Thunderbird, for example, are free programs that can effectively replace Microsoft Office and Outlook. GIMP is a popular free alternative to Adobe Photoshop.

Free software isn't about all work and no play. VLC is one of the best media players available. It will handle nearly any video or movie format you throw at it, including DVD and Blu-ray.

Free Online Net Meetings with MeetingBurner

Thanks to telework, remote offices and global economies, fewer meetings are conducted entirely face-to-face in a single location. Technology has been slow to keep up though, with most online meeting tools about as convenient as a root canal. Here's a new meeting tool that makes meeting online fast, efficient and productive.

MeetingBurner is a web-based meeting tool that is among the cream of the crop. It's fast and easy to join -- I was able to host my first meeting within about 5 minutes of arriving at the site for the very first time. When you start a meeting, you get a URL to share as well as a call-in number that you can Skype or dial normally. MeetingBurner also offers to send invites to attendees for you automatically -- and no one has to actually sign up for MeetingBurner to join the meeting except for you, the host, so you're not slowing down the meeting or spamming attendees with yet another account to join.


FrontAccounting Installation


  • A working HTTP web server, e.g. Apache or IIS.
  • PHP installed on the web server.
  • A working MySQL server - with innodb tables enabled (see notes below)
  • Adobe Acrobat Reader - or another PDF reader for viewing the PDF reports before printing them out.

Important Notes

  • One critical aspect of the PHP installation is the setting of session.auto_start in the php.ini file. Some rpm distributions of PHP have the default setting of session.auto_start = 1. This starts a new session at the beginning of each script. However, this makes it impossible to instantiate any class objects that the system relies on. Classes are used extensively by this system. When sessions are required they are started by the system and this setting of session.auto_start can and should be set to 0.
  • For security reasons both Register Globals and Magic Quotes php settings should be set to Off. When FrontAccounting is used with www server running php as Apache module, respective flags are set in .htaccess file. When your server uses CGI interface to PHP you should set magic_quotes_gpc = 0 and register_globals = 0 in php.ini file.
  • Innodb tables must be enabled in the MySQL server. These tables allow database transactions which are a critical component of the software. This is enabled by default in the newer versions of MySQL. If you need to enable it yourself, consult the MySQL manual.
  • FrontAccounting is implemented and tested with MySQL. Generally it should work with other databases, but this is not supported in any way at the moment.
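The PHP settings called out in the notes above all live in php.ini. As a sketch, the relevant directives (these are the standard php.ini directive names) would look like:

```ini
; php.ini settings relevant to FrontAccounting
session.auto_start = 0
register_globals = Off
magic_quotes_gpc = Off
```

To confirm that InnoDB is enabled, running `SHOW ENGINES;` at the MySQL prompt lists each storage engine along with whether it is available.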

Copying all the project files to the correct directory

  • You must obviously have downloaded the project archive to be reading this file.
  • All the files inside this archive should be copied to a directory under the web server root directory.
  • For example, create a folder called /account, and extract the archive into this folder.

Installation Steps

  1. If you have the option to create multiple databases on your host, create one, e.g. frontacc; otherwise write down the database name for your account. At the same time look up the username and password for the database. You will need this information during the wizard install.
  2. Browse to your_url/account (or whatever directory you chose). The first time you run FrontAccounting this will run the install wizard, set up a practice company, and optionally populate it with initial demo data. You can later create your own real company. It is a good idea to get familiar with the system before starting your own company.
  3. After a successful install, remove or rename your install directory for safety reasons. You don't need it any more.

 Logging In For the First Time

  1. Please ensure that the folder /company/0 on the server is writable.
  2. Open a browser and enter the URL for the web server directory where FrontAccounting is installed.
  3. Enter the user name 'admin' (without the quotation marks).
  4. Enter the password you created during install.
  5. You can set up additional user accounts from the System Setup tab. Be careful not to delete the demonstration user until a new user has been set up. If there are no users defined, the next time you try to log in you won't be able to. The only way then to create a user to log in with is to manually edit the SQL table "users" to insert a user.

Setting Up Company Specific Data

  1. All the standing configuration data is defined from the Setup tab and each link should be reviewed to enter appropriate data for the business.


Troubleshooting Sessions

  1. If FrontAccounting is installed locally, you may not have the session save path set correctly. Normally this is set in your php.ini (on Windows). The entry is called session.save_path. Make sure it is set to a directory that actually exists. The default is /tmp, which may not be valid.
  2. If you are installing FrontAccounting onto a shared server, you may have to set the session save path within FrontAccounting. At the top of /includes/ you will find this line:
  3. Uncomment this line and set the path to a directory that exists on your server. Make sure that you have read/write privileges on this directory.

Full-featured Ubuntu online installation using kickstart by jhansonxi

This is an elaborate fault-tolerant Kickstart script for an on-line Ubuntu installation, optimized for home users, with extensive remote administration support and documentation. Not recommended for beginners.

This isn't just another trivial automated installation script although it started out that way. Basic installation presets led to integrated bug workarounds, setting defaults for many applications and servers, more features, etc. While you may disagree with some of my package choices, they were selected for my clients - not you. Change it if you have different needs. First, a little background on my deployments.

All of my clients have cheap desktop systems or laptops, usually outdated. Almost any CPU, chipset, GPU, and drive configuration. They're either stand-alone or connected together on small Ethernet networks. Some have broadband, some only dial-up (POTS). Ages vary from toddlers to senior citizens. A few are Windows gamers. This mix results in a wide variety of system hardware, peripherals, application requirements, and configurations. I've had to deal with most every type of kernel, application, and hardware bug. Every deployment unearths a new bug to fight. Some of these are Ubuntu's fault but many are upstream.

Inevitably I spend many hours doing full OS conversions to Ubuntu or dual-boot configurations. I've found that using a Live CD to install Ubuntu is about 4x faster than installing Windows when drivers, updates, and application installs are figured in. While I could set up slipstream builds of Windows I don't install it enough to bother with and the variety of versions (Home, Pro, upgrade, OEM,...) and licenses makes it impractical. Relatively speaking, I spend about 3x as long transferring documents, settings, and game/application files (scattered all over C:) to Ubuntu than I do installing either it or Windows. But I'll take any time savings I can get.

A while back, when Ubuntu 10.04 (Lucid Lynx) was released, I decided to streamline my installations. This wasn't just to save time. I also needed to make my installations more uniform, as I couldn't remember all the various tweaks and bug fixes that I performed from one installation to the next.

I had several goals for this project, not necessarily all at the beginning, as some were the result of test installs, client feedback, and feature creep.

  1. Fix all the bugs that my clients encountered on their existing installs plus all the other Ubuntu annoyances I've been manually correcting.
  2. Do everything the "correct way" instead of blindly following HOW-TOs from amateurs that involved script and text file hacking that would be lost on the next update. I had to learn proper use of Gconf, PolicyKit, Upstart, init scripts, and dpkg.
  3. Configure all of the network features that my clients had asked for, usually file or peripheral sharing. Internet content filtering for kids was a requirement.
  4. Secure remote access and administration. It's bad enough when a client has a software problem. Having to waste time with an on-site visit is idiotic when it's not an Internet access problem and a broadband connection is available. The same kickstart configuration can be used for both an "administration" system as well as clients. Having them nearly identical makes both remote and verbal support easier.
  5. Make it easier to obtain diagnostic and status information, for me and the client.
  6. Research applications that meet customer needs and are stable. Configure them so the customer doesn't need to.
  7. Document everything, especially anything I spent significant time researching.

On all of these I mostly succeeded. There are still a few gaps, but they're minor (for my deployments at least), and after working on this for 18 months I needed to get on with my life. I figure that after a few million deployments I should break even. I'm now busy updating the dozen or so I currently have.

So what's in it? The base is just a plain 10.04 (i386 or amd64) installation. Two reasons for that - it's the LTS release and I didn't have time to upgrade to newer releases or work around their new bugs. It's supported for another year or so. I'll probably update it for 12.04 after it is released (and clean up my code). Highlights:

Apache. Used for sharing the public directory (see below) and accessing the various web-based tools. The home page is generated from PHP and attempts to correct for port-forwarding (SSH tunnel) if it detects you are not using port 80.

Webmin. It's the standard for web-based administration. I added a module for ddclient (Dynamic DNS). The module is primitive but usable and I fixed the developer's Engrish.

DansGuardian. Probably three months work on just this. For content filtering there isn't really anything else. Unfortunately it has almost no support tools so I had to write them. Most of these have been announced in previous blog postings although they've been updated since then. The most complicated is "dg-squid-control" which enables/disables Squid, DansGuardian, and various iptables rules. Another loads Shalla's blacklist. It doesn't have system group integration so I wrote "dg-filter-group-updater" to semi-integrate it. There are four filter groups - no access, restricted (whitelist with an index page), filtered, and unrestricted. I added a Webmin module for it I found on Sourceforge. It's not great but makes it easier to modify the grey and exception lists. Included are lists I wrote that allow access to mostly kid sites (a couple of hundred entries). The entries have wiki-style links in comments that are extracted by "dg-filter-group-2-index-gen" to create the restricted index page. There's a How-To page for proxy configuration that users are directed to when they try to bypass it.

The only limitation is that browser configurations are set to use the proxy by default but dg-squid-control doesn't have the ability to reset them if the proxy is disabled. I spent two weeks working on INI file parsing functions (many applications still use this bad Windows standard for configuration files). While they seem to work I need to significantly restructure the tool to make use of them.

DansGuardian had no development for a few years but recently a new maintainer is in charge and patches are being accepted. Hopefully full system account integration will be added.

UFW. The Uncomplicated Firewall is a front-end to iptables and there is a GUI for it. One feature it has is application profiles, which make it easy to create ready-to-use filter rules. I created about 300 of them for almost every Linux service, application, or game (and most Windows games on Wine).
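To illustrate the profile format (this example is hypothetical, not one of the 300 shipped profiles), a UFW application profile is a small INI-style file dropped into /etc/ufw/applications.d:

```ini
[ExampleGame]
title=Example Game Server
description=Hypothetical game service used to illustrate the profile format
ports=27015/tcp|27015/udp
```

Once the file is in place, `sudo ufw allow ExampleGame` opens all the listed ports in one rule.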

File sharing. The /home/local directory is for local (non-network) file sharing between users on the same system. There is also a /home/public directory that is shared over Samba, HTTP, FTP, and NFS. WebDAV didn't make the cut this time around.

Recovery Mode. I added many scripts to the menu for status information from just about everything. Several of my tools are accessible from it.

SSH server. You make a key pair with ssh-keygen, named client_administrator_id_dsa (the private key should be encrypted with a passphrase), and include the public (*.pub) part in the kickstart_files/authentication sub-directory. It is added to the ssh configuration directory on every system. Using another tool, "remote-admin-key-control", system owners (sysowner group) can enable or disable remote access. This is for several reasons including privacy, liability, and accounting (for corporate clients where the person requesting support may not have purchase authority).

When the remote-admin-key-control adds the key to the administrator account ~/.ssh/authorized_keys, you can connect to the system without a password using the private key (you still need to enter the key passphrase). The radmin-ssh tool takes this one step further and forwards the ports for every major network service that can function over ssh. It also shows example command lines (based on the current connection) for scp, sftp, sshfs, and NFS. You still need the administrator password to get root access.

X2Go. Remote desktop access that's faster than VNC. Uses SSH (and the same key).

OpenVPN. A partially configured Remote Technical Support VPN connection is installed and available through Network Manager. If the client system is behind a firewall that you can't SSH through, the client can activate this VPN to connect to your administration system so that you can SSH back through it. Rules for iptables can be enabled that prevent the client accessing anything on the administration system. It connects using 443/udp so should work through most firewalls.
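A minimal client-side stanza for a connection like this might look as follows (the host name and certificate paths are placeholders, not values from the kickstart archive):

```
client
dev tun
proto udp
remote support.example.com 443
ca ca.crt
cert client.crt
key client.key
```

Using 443/udp sidesteps most outbound firewall filtering while avoiding contention with a local HTTPS service on 443/tcp.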

Books and guides. Located in the desktop help menu (System > Help) is a menu entry that opens a directory for books. My deployments have subdirectories with Getting Started with Ubuntu 10.04 - Second Edition from the Ubuntu Manual Project and user guides. You can easily add more as the kickstart script grabs everything in its local-books subdirectory. For the end-user I wrote networks-and-file-sharing-help.html (in the same help menu).

For the installer the main source of documentation is the kickstart script itself. I got a little carried away with comments. The next major document is TODO.html which is added to the administrator's home directory. It was intended to list post-install tasks that needed to be completed since there are many things the installer can't do (like compile kernel modules). After adding background information on the various tasks, troubleshooting help, and example commands, it's basically another book. You should read it before using the kickstart script.

Scanner Server. Allows remote access to a scanner through a web interface. Simpler than using saned (but that is also available if you enable it). It had several bugs so I fixed it and added a few features (with help from a Ubuntu Forum member pqwoerituytrueiwoq). Eventually we hit the limit of what it could do so pqwoerituytrueiwoq started writing PHP Server Scanner as a replacement. For a 12.04 release I will probably use that instead. I wrote "scanner-access-enabler" to work around udev permission problems with some scanners (especially SCSI models).

Notifications. Pop-up notices will be shown from smartd, mdadm, sshd, and OpenVPN when something significant happens. Without the first two the user doesn't know about pending drive problems until the system fails to boot. I've also had them turn the system off when I was in the process of updating it and the SSH notification helps prevent that. The OpenVPN notification is mostly for the administration system and includes the tunnel IP address of the client. OpenSSH has horrible support for this kind of scripting. OpenVPN's scripting support is absolutely beautiful.

Webcam Server. A command-line utility that I wrote a GUI for. It has a Java applet that can only be accessed locally but a static image is available from the internal web server to anywhere.

BackupPC. It uses its default directory for backups so don't enable it unless you mount something else there. A cron job will shut the system down after a backup if there are no users logged in. It has been somewhat hardened against abuse with wrapper scripts for tar and rsync.

There are many bugs, both big and small, that are either fixed or worked around. The script lists the numbers where applicable. The TODO document lists a bunch also. Some packages were added but later removed (Oracle/Sun Java due to a licensing problem, Moonlight since it didn't work with any Silverlight site I tested).

There are some limitations to Ubuntu's kickstart support. I'm not sure why I used kickstart in the first place. Perhaps the name reminded me of KiXtart, a tool I used when I was a Windows sysadmin. Kickstart scripts are the standard for automating Red Hat installations (preseeding is the Debian standard) but Ubuntu's version is a crippled clone of it. In part it acts like a preseed file (even has a "preseed" command) but also has sections for scripts that are exported and executed at different points during the installation. About 90% of the installation occurs during the "post-install" script. The worst problem with Ubuntu's kickstart support is that the scripts are exported twice and backslashes are expanded both times. This means that every backslash has to be quadrupled. This gets real ugly with sed and regular expressions. Because of this you'll see "original" and "extra slashy" versions of many command lines. I wrote quad-backslash-check to find mistakes.

The other problem is that the way the script is executed by the installer hides line numbers when syntax errors occur, making debugging difficult. I wrote quote-count and quote-count-query to find unmatched quotes (and trailing escaped whitespace that was supposed to be newlines) which were the most common cause of failure.

I've made an archive of my kickstart file, its support files, and configuration files for various services on my server for you to download (12.5MB, MD5: b5e79e6e287da38da75ea40d0d18f07f ). The script, error checking and ISO management tools, and server configuration files are in the "kickstart" sub-directory. A few packages are included because they are hard to find but others are excluded because of size. Where a package is missing there is a "file_listing.txt" file showing the name of the package I'm using. My installation includes the following which you should download and add back in:

Amazon MP3 Downloader (./Amazon/amazonmp3.deb)
DansGuardian Webmin Module (./DansGuardian Webmin Module/dgwebmin-0.7.1.wbm)
Desura client (./Desura/desura-i686.tar.gz)
G'MIC (./GMIC/gmic_1.5.0.7_*.deb)
Gourmet (./Gourmet/gourmet_0.15.7-1_all.deb)
VMware Player (./VMware/VMware-Player-*.bundle)

VMware Player is optional.  It has kernel modules so the kickstart script only retrieves the first install file returned from the web server whose name matches the architecture.  It puts it in /root for later installation.

The target systems need network-bootable Ethernet devices, either with integrated PXE clients or a bootable CD from ROM-o-matic.

You need a DHCP server that can send out:

filename "pxelinux.0"
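In ISC dhcpd terms, that amounts to a subnet declaration along these lines (the addresses are placeholders for your own network; next-server points at the TFTP host):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.2;
  filename "pxelinux.0";
}
```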

The tftp server needs to serve the pxelinux.0 bootstrap, vesamenu.c32, and the menu files. These are available from the Ubuntu netboot images. The bootstrap and vesamenu.c32 are identical between the i386 and amd64 versions, only the kernel, initrd, and menus are different. You can use my menu files instead of the standard set in the netboot archive. The most important is the "ubuntu.cfg" file. You'll notice that my menu files list many distros and versions. Only the utility, Knoppix, and Ubuntu menus function fully. The rest are unfinished (and probably obsolete) experiments. FreeDOS is for BIOS updates.

My tftp server is atftpd which works well except it has a 30MB or so limit on tftp transfers. This only affects the tftp version of Parted Magic (they have a script to split it up into 30MB parts). It is started by inetd on demand.

I use loopback-mounted ISOs for the kickstart installs and all LiveCDs netboots. Because I have so many, I exceeded the default maximum number of loopback nodes available. I set max_loop=128 in my server's kernel command line to allow for many more.
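On a GRUB 2 system that setup might be sketched like this (the paths are examples, not the author's actual layout; run update-grub after editing):

```
# /etc/default/grub -- raise the loop-device limit
GRUB_CMDLINE_LINUX_DEFAULT="quiet max_loop=128"

# /etc/fstab -- one loopback-mounted ISO per line
/srv/linux/ubuntu-10.04-alternate-i386.iso  /srv/tftp/isomnt/ubuntu-10.04-alternate-i386  iso9660  loop,ro  0  0
```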

The Ubuntu Minimal CD ISOs are the source for the kernel and initrd for the kickstart install. The architecture (and release) of the kernel on these ISOs must match the architecture of Ubuntu you want to install on the target system. You'll probably want both the i386 and amd64 versions.

PXE Linux doesn't support symlinks so my ISOs are mounted in the tftp directory under ./isomnt. Symlinks to the ISOs are in ./isolnk and are the source of the mounts. I set it up this way originally because the ISOs were in /srv/linux in various subdirectories so having the links in one place made it easier to manage. But my ISO collection grew too big to manage manually so I wrote "tftp-iso-mount" that creates the mountings for me. It searches through my /srv/linux directory for ISO files and creates isomnt_fstab.txt that can be appended to fstab. It also deletes and recreates the isomnt and isolnk directories and creates the "isomnt-all" script to mount them.

The ISOs are accessed through both NFS and Apache. I originally intended to use NFS for everything but I found that debian-installer, which performs the installation and executes the kickstart script (also on the "alternate" ISOs), doesn't support NFS. So I had to set up Apache to serve them. The Apache configuration is rather simple. There are a few symlinks in /var/www that link to various directories elsewhere. One named "ubuntu" links to /srv/linux/Ubuntu. The kickstart support files are placed in /srv/linux/Ubuntu/kickstart_files and are accessed via the link. NFS is still used for booting LiveCDs (for bug testing and demos). There is also a "tftp" symlink to /srv/tftp used for local deb loading (see below).

The kickstart script itself, Ubuntu-10.04-alternate-desktop.cfg, is saved to /srv/tftp/kickstart/ubuntu/10.04/alternate-desktop.cfg after being backslash and quote checked.

Several preseed values are set with the "preseed" command at the beginning of the script. You'll probably want to change the time zone there. License agreements are pre-agreed to as they will halt the installation if they prompt for input.

Like I mentioned earlier, the vast majority of the work happens in the post-install script. This executes after the base Ubuntu packages are installed. The most important variable to set is $add_files_root, which must point to the URL and directory of your web server where the rest of the kickstart support files are located (no trailing slash). The script adapts for 32-bit and 64-bit packages as needed based on the architecture of the netboot installer. There is also a "late_command" script that executes near the end of the installation, after debian-installer creates the administrator account (which happens after the post-install script finishes).

The debug variables are important for the initial tests. The $package_debug variable has the most impact as it will change package installations from large blocks installed in one pass (not "true") to each package individually ("true"). When true, it slows down installation significantly but you can find individual package failures in the kickseed-post-script.log and installer syslog (located in /var/log/installer after installation). Setting $wget_quiet to null will guarantee a huge log file. The $script_debug variable controls status messages from the package install and mirror selection functions.

The $mirror_list variable contains a list of Ubuntu mirrors (not Medibuntu or PPAs) that should have relatively similar update intervals. This is used by the fault-tolerant mirror selection function, f_mirror_chk, that will cycle through these and check for availability and stability (i.e., not in the middle of sync). The mirrors included in the list are good for the USA. These are exported to the apt directory so that the apt-mirror-rotate command can use them to change mirrors quickly from the command line or through the recovery mode menu. When a package fails to be installed via the f_ftdpkg and f_ftapt functions, another mirror will be tried to attempt to work around damaged packages or missing dependencies.

To save bandwidth the post-install script looks for loopback mounted ISOs of the Ubuntu 10.04 live CD and Ubuntu Studio (both i386 and amd64 versions) in the isomnt sub-directory via the tftp link in the Apache default site. It copies all debs it finds directly into the apt cache. It also copies the contents of several kickstart support sub-directories (game-debs* and local-debs*). This is a primitive way to serve the bulk of the packages locally while retrieving everything else from the mirrors. You need to change the URLs in the pre-load debs section to the "pool" sub-directories of the mounted ISOs in "./tftp/isomnt/".

Because loading this many debs can run a root volume out of space, the $game_debs variable can be used to prevent game packages from being retrieved. Normally you should have at least a 20GB root (/) volume although it could be made smaller with some experimentation. An alternative to this method would be a full deb-mirror or a large caching proxy.

Set the OpenVPN variable $openvpnurl to the Internet URL of your administration system or the firewall it's behind. Set $openvpnserver to the hostname of your administration system (which can have the same value, as it won't be connecting to itself).

Basic usage starts with netbooting the client system. Some have to be set to netboot in the BIOS and some have a hotkey you can press at POST to access a boot selection menu. The system then obtains an address and BOOTP information from the DHCP server. It then loads pxelinux.0 from the TFTP server which will in turn load vesamenu.c32 which displays the "Netboot main menu". Select Ubuntu from the list and look for the Ubuntu 10.04 Minimal CD KS entries. Select the one for your architecture and press the Tab key to edit the kernel boot line. Set any kernel parameters you want to be added to the default Grub2 configuration after the double dash (--), like "nomodeset". Set the hostname and domain values for the target as these are used in several places for bug workarounds and configurations. Then press Enter. The installer should boot. If nothing happens when you press Enter and you are returned to the Ubuntu boot listing menu, verify the ISOs are mounted on the server then try again (you will need to edit the entry again).

If there are no problems then you will be asked only two questions. The first is drive partitioning. This can be automated but my client systems are too different to do so. The next question will be the administrator password. After that it will execute the post-install and late-command scripts, then prompt you to reboot. Just hit the Enter key when it does, as Ctrl-Alt-Delete will prevent the installer from properly finishing the installation (it's not quite done when it says it is). Full installation will take 2-3 hours depending on debug settings, availability of local debs, and Internet speeds.

In case of problems see the TODO document, which has a troubleshooting section. The only problems I've had installing were missing drivers or bugs in the installer (especially with encrypted drives - see the TODO). My Dell Inspiron 11z, which has an Atheros AR8132/L1c Ethernet device, wasn't supported by the kernel the minimal CD was using. To work around it I made a second network connection with an old Linksys USB100TX. The Atheros did the netboot (the Linksys does not have the capability) but the installer only saw the Linksys afterwards and had no problems using it (other than it being slow).

German cities following Munich's open source example

Municipal administrations in Germany are starting to follow the example of the city of Munich and increase their use of free and open source software, reports the Financial Times Deutschland on 3 January. "The demand for open source is growing - and not only at public administrations", according to the newspaper. It mentions the cities of Freiburg and Jena as examples of city administrations following Munich's lead.


Gimp Change Size Dimensions Scale Quality

Changing the Size (Dimensions) of an Image (Scale)

It’s a common problem that you may have an image that is too large for a particular purpose (embedding in a webpage, posting somewhere online, or including in an email, for instance). In this case you will often want to scale the image down to a smaller size more suitable for your use.

This is a very simple task to accomplish in GIMP.

The image we’ll be using to illustrate this with is The Horsehead Nebula in Infrared.

When you first open your image in GIMP, chances are that the image will be zoomed so that the entire image fits in your canvas. The thing to notice for this example is that by default the window decoration at the top of GIMP will show you some information about the image.

GIMP Scale Image Tutorial Nebula
View of the GIMP canvas, with information at the top of the window.

Notice that the information at the top of the window shows the current pixel dimensions of the image (in this case, the pixel size is 1225×1280).

To resize the image to new dimensions, we need only invoke the Scale Image dialog (Image → Scale Image… in the menus):

This will then open the Scale Image dialog:

GIMP Scale Image Tutorial Dialog
The Scale Image dialog.

In the Scale Image dialog, you’ll find a place to enter new values for Width and Height. If you know one of the new dimensions you’d like for the image, fill in the appropriate one here.

You’ll also notice a small chain just to the right of the Width and Height entry boxes. This icon shows that the Width and Height values are locked with respect to each other, meaning that changing one value will cause the other to change in order to keep the same aspect ratio (no strange compression or stretching in the image).

For example, if you knew that you wanted your image to have a new width of 600px, you can enter that value in the Width input, and the Height will automatically change to maintain the aspect ratio of the image:

GIMP Scale Image Tutorial Dialog Scaled Values 
Changing the Width to 600px.

As you can see, entering 600px for the width automatically changes the height to 627px.
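The linked Height follows directly from the original 1225×1280 pixel size. As a quick sanity check of the arithmetic (a throwaway one-liner, not part of GIMP):

```shell
# new_height = old_height * new_width / old_width, rounded to the nearest pixel
awk 'BEGIN { printf "%.0f\n", 1280 * 600 / 1225 }'   # prints 627
```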

Also notice I have shown a different option under Quality → Interpolation. The default value for this is Cubic, but to retain the best quality it would be better to use Sinc (Lanczos3).

If you want to specify a new size using a different type of value (other than Pixel size), you can change the type by clicking on the “px” spinner:

GIMP Scale Image Value Types 
Changing input value types.

A common use for this could be if you wanted to specify a new size as a percentage of the old one. In this case you could change to “percent”, and then enter 50 in either field to scale the image in half.

Once you are done scaling the image, don’t forget to export the changes you’ve made:

File → Export As… to export as a new filename, or:

File → Overwrite to overwrite the original file (use caution).

For more detail about using Scale Image, you can see the documentation.

Changing the Size (Filesize) of a JPEG

You can also modify the filesize of an image when exporting it to a format like JPEG. JPEG is a lossy compression algorithm, meaning that when saving images to the JPEG format, you will sacrifice some image quality to gain a smaller filesize.

Using the same Horsehead Nebula image from above, I have resized it to 200px wide (see above), and exported it using different levels of JPEG compression:

GIMP JPEG compression example different quality 
Comparison of different JPEG compression levels.

As you can see, even at a quality setting of 80, the image is significantly smaller in filesize (77% size reduction), while the image quality is still quite reasonable.

When you’ve finished any image modifications you are doing and are ready to export, simply invoke the export dialog with File → Export As…:

This will invoke the Export Image dialog:

GIMP JPEG compression export name filetype dialog 

You can now enter a new name for your file here. If you include the filetype extension (in this case, .jpg), GIMP will automatically try to export in that file format for you. Here I am exporting the image as a JPEG file.

You can also navigate to a new location on your computer through the Places pane, if you need to export the file to a different location. When you are ready to export the image, just hit the Export button.

This will then bring up the Export Image as JPEG dialog, where you can change the quality of the export:

GIMP Export JPEG options dialog 

From this dialog you can now change the quality of the export. If you also have the “Show preview in image window” option checked, the image on the canvas will update to reflect the quality value you input. This will also enable the “File size:” information to tell you what the resulting file size will be. (You may need to move some windows around to view the preview on the canvas in the background).

When you are happy with the results, hit the Export button to export.

To see more details about exporting different image formats, see Getting Images out of GIMP in the manual.

Gimp Crop an Image

Crop an Image

There are numerous reasons you may want to crop an image. You may want to remove useless borders or information for aesthetic reasons, or you may want the focus of the final image to be some particular detail, for instance.

In a nutshell, cropping is just an operation to trim the image down to a smaller region than what you started with:

GIMP Crop Example 
Original image (left), cropped image (right)

The procedure to crop an image is straightforward. You can either get to the Crop Tool through the tools palette:

GIMP Crop Tool Palette 
Crop Tool on the Tools Palette.

Or you can access the crop tool through the menus: Tools → Transform Tools → Crop.

GIMP Crop Tool cursor

Once the tool is activated, you’ll notice that your mouse cursor on the canvas will change to indicate the Crop Tool is being used.

Now you can Left-Click anywhere on your image canvas, and drag the mouse to a new location to highlight an initial selection to crop. You don’t have to worry about being exact at this point, as you will be able to modify the final selection before actually cropping.

GIMP Crop Tutorial Example first pass 
Initial pass with the Crop Tool.
Crop Tool options (left), cropping on the canvas (right).

After making the initial selection of a region to crop, you’ll find the selection still active. At this point hovering your mouse cursor over any of the four corners or sides of the selection will change the mouse cursor, and highlight that region.

This allows you to now fine-tune the selection for cropping. You can click and drag any side or corner to move that portion of the selection.

Once you are happy with the region to crop, you can just press the “Enter” key on your keyboard to commit the crop. If at any time you’d like to start over or decide not to crop at all, you can press the “Esc” key on your keyboard to back out of the operation.

See the documentation for more information on cropping in GIMP.

Another Method

Another way to crop an image is to make a selection first, using the Rectangle Select Tool:

GIMP rectangle select tool crop image 
Rectangle Select Tool.

Or through the menus: Tools → Selection Tools → Rectangle Select.

You can then highlight a selection the same way as with the Crop Tool, and adjust the selection as well. Once you have a selection you like, you can crop the image to fit that selection through Image → Crop to Selection.


Gimp Rotate and or Flip an Image

Rotate and/or Flip an Image

There may be a time that you would need to rotate an image. For instance, you may have taken the image with your camera in a vertical orientation, and for some reason it wasn’t detected by GIMP as needing to be rotated (GIMP will normally figure this out for you, but not always).

There may also be a time that you’d like to flip an image. These commands are grouped together under the same menu item (Image → Transform):

Flip an Image

If you want to flip your image, the Transform menu offers two options, Flip Horizontally, or Flip Vertically. This operation will mirror your image along the specified axis. For example, here are all of the flip operations shown in a single image:

GIMP flip image samples 
All flips applied to base image (top left).

Rotate an Image

Image rotation from the Transform menu is constrained to either 90° clockwise/counter-clockwise, or 180°.

Don’t misinterpret this to mean that GIMP cannot do arbitrary rotations (any angle). Arbitrary rotations are handled on a per-layer basis, while the image rotation described here applies to the entire image at once.

GIMP rotate image samples 
Original (top left), 90° clockwise (top right)
90° counter-clockwise (bottom left), 180° (bottom right)

Gmail Email Lists or Groups

(Laura, Teresa, RosieCat23, BR and 3 others @ WikiHow) Creating a mailing list in Gmail saves a lot of time and effort when sending a mass e-mail. A mailing list in Gmail is called a Group. This is how we do it...



  1. Open Gmail, and click the "Gmail" button on the left.

  2. Click the Contacts option.

  3. Click the 'New Group' option, give a name to the new group (for example, 'Work'), then click OK to create it.

    • You will see the new group shown on the left among the Contacts.

  4. Click on the 'Work' group to open it.

  5. Click on the icon (see image); it will open a window where you can start adding contacts to this group.

    • For example, say you have a contact Hennesy: type 'Hennesy' in the window, Hennesy's e-mail will pop up, and you can click on it.
    • Click the Add button.

  6. Next, say you have Ella in your contacts to add to the group:

    • Type 'Ella' in the window, Ella's e-mail will pop up, and you can click on it.
    • Click the 'Add' button.

  7. Continue the same procedure until you have entered all your work contacts; their e-mails will also be shown.

  8. The new group 'Work' will be shown in your Contacts with the number of contacts in parentheses.

  9. Now click the 'Contacts' button, then the 'Gmail' option, to return to the original page.

  10. Click the 'Compose' button to start composing an e-mail to the new 'Work' group.

    • Enter the word 'work' in the "To:" field; it will bring the 'Work' group up in a drop-down. Click on it.
    • All your work contacts will be automatically inserted into the "To:" field.


Handbrake Install Debian and Ubuntu

As you may know, HandBrake 0.9.9 has just been released. For those who don’t know, HandBrake is an open-source, multiplatform, multithreaded video transcoder. It is used for converting DVD or Blu-ray discs to containers like MP4 or MKV, using video codecs such as H.264 or MPEG-4. It can also encode audio tracks to formats like AAC, MP3, FLAC, or AC3.

Let’s start the installation. Because we have a PPA available for HandBrake 0.9.9, all we have to do is:

Add the repo:

$ sudo add-apt-repository ppa:stebbins/handbrake-releases

Update the repos:

$ sudo apt-get update

Install Handbrake:

$ sudo apt-get install handbrake-gtk

How Heartbleed flaw works and what you should do

(Mark Huffman @ ConsumerAffairs) Heartbleed, of course, is the latest security flaw to put consumers' personal information at risk, and from the news accounts you've read, you probably get the idea it's serious business.

It all stems from a small mistake in updated code for OpenSSL, the encryption library that web services like Facebook, Amazon, Google – you name it – use to protect sensitive data.

Back in late 2012 or early 2013 one of the coders working on OpenSSL made a mistake. It involved the communication between a user's computer and the server running OpenSSL.

The two computers talk to one another from time to time to make sure they are still connected. The user's computer gives a couple of letters of a word – “potato,” for instance – and asks the server to send it back, specifying that the word is six characters long.

Critical step left out

But the person writing the code did not put in the part of the code specifying the number of characters in the word it was looking for. Adam Allred, research technologist at the Georgia Tech Information Security Center (GTISC), says that small oversight resulted in a huge security breach.

“Someone could then say 'send me back the word potato, but it's 500 characters long.' So the server, being none the wiser, sends back the word potato in the first six characters and then sends the next 494 characters, whatever they happen to be, after the word potato,” Allred told ConsumerAffairs.

This information is almost always encrypted as it moves over the network, but it is then decrypted and written into the server's memory, right next to the word potato. In many cases those adjacent characters make up things like user names and passwords.
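The over-read can be sketched in a few lines of shell. Everything here is invented for illustration (the variable, the "secrets", the lengths); the point is only that the server trusts the declared length instead of the word's real length:

```shell
# Hypothetical "server memory": the echoed word sits right next to secrets.
memory="potato user=alice password=hunter2 session=9f8e7d"

# Honest heartbeat: ask for "potato" and declare its true length, 6 characters.
printf '%s\n' "$memory" | cut -c1-6     # prints: potato

# Heartbleed-style request: same word, but a declared length of 500.
# An unpatched server happily returns whatever follows the word in memory.
printf '%s\n' "$memory" | cut -c1-500   # prints potato plus the adjacent secrets
```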

The flaw went unnoticed for months. Then, a highly skilled computer technician figured it out.

“At this point, a week later, the skill level needed to exploit the Heartbleed vulnerability is much lower,” Allred said.

Roughly a one-in-six chance

As it turns out, only about 17% of Internet servers used the flawed version of OpenSSL, so as a consumer there is roughly a one-in-six chance that any given password-protected web server you visit was affected. Still, Allred says consumers have a right to be concerned.

“As a consumer you have to think about every website you go to that uses 'https,'” he said. “For every one of those websites you have to ask, were they vulnerable and if they were, you need to change your password for those sites. But you have to do it after those sites patch.”

That bears repeating. Don't change your password for that site until it has been patched.

How do you know which sites were affected and which ones have been patched? Fortunately, that information is readily available online. Mashable, for example, maintains a list of updated sites.


Are there any lessons to be learned from the Heartbleed security flaw? Allred sees one big one.

“What I would like to see happen is a new awareness that an extremely important set of code that so many people in the world rely on and don't even know it, is being developed by very few people with very little money,” Allred said. “Many of the people writing code are volunteers. The OpenSSL Project has one full-time employee.”

That's right. This critically important part of the Internet infrastructure is basically a volunteer operation.

Allred suggests Google, Facebook and other web giants have a vested interest in making the system more secure, and should start investing money in OpenSSL, to provide the people and infrastructure needed to build better software.


How To Buy A Tablet

(Edward Baig, USA TODAY) While the iPad is still king, consumers have an increasing number of great tablets to choose from, with all sorts of sizes, prices and features. A buying guide for shoppers with tablets on their minds.

Not all that long ago, you couldn't imagine wanting a tablet computer. Your smartphone and laptop met all your computing requirements, or so you figured. And tablets were those awkward, stylus-driven computers that were pushed for years by Bill Gates at Microsoft, with very little to show for it.

Today, you're in crowded company if a tablet computer tops your holiday wish list.

Nearly 6 in 10 shoppers surveyed recently by the PriceGrabber price-comparison site said they'd rather receive a tablet computer than a laptop. And 71% said that tablets would replace e-readers as gifts this year.

Forrester Research analyst Sarah Rotman Epps forecasts that tablets will reach 112.5 million U.S. consumers — one-third of the U.S. adult population — by 2016. The market was practically non-existent as recently as early 2010.

That was right before everything changed with the arrival of the iPad, still the finest mainstream tablet out there.

The last two generations of full-size iPads boast knockout Retina display screens. More than 275,000 apps have been especially produced for Apple's slate, way more than the apps that have been optimized for any other platform. The iOS software behind Apple's tablet is generally friendlier than competitors' software.


But there's no shortage of rivals trying to dethrone the market champ, with most challengers to date relying on some variant of Google's Android operating system. Gartner predicts that 219 million tablets sold worldwide in 2016 will run Apple's operating system, compared with about 109 million for Android.

Google's own Nexus 7 (made by Asus) and Nexus 10 (made by Samsung) models lead the Android brigade and run the current flavor of Android called Jelly Bean. You still see some Android tablets running the previous version, Ice Cream Sandwich.

The Google Nexus 10 tablet offers a lot for the price tag. (Photo: Chris Thomas)

Specs-wise, the Nexus 10 boasts an even higher-resolution 10-inch display than the iPad, though you have a difficult time detecting much of a difference in a side-by-side comparison.

On the various Kindle Fire tablets that Amazon sells and the Nook tablets sold by Barnes & Noble, Android is present but shoved in the background and barely recognizable, replaced by those companies' own user interfaces.

A fresh challenge is coming from another flank, the radically different Windows 8 operating system driven by Microsoft and its PC partners. Windows 8 is designed not only for multi-touch tablets but traditional desktop PCs and laptops, a controversial and somewhat confusing decision for Microsoft that differs from the approach Apple is taking. Despite overlapping features, Apple is keeping the OS X operating system (for Macs) and iOS (for the iPad and iPhone) separate.

NPD data suggest a very slow start for Win 8 tablets so far, with market share of less than 1%.

The latest crop of tablets from all comers brings screen sizes that are typically in the 7- to 10-inch range, though you see some displays that are a bit smaller or larger. The trade-off to shoppers is obvious. Do you want more screen real estate? Or a lighter machine you might be able to stash in a pocket?

Apple's popular iPad has a 9.7-inch display; the recently added iPad Mini has a 7.9-inch screen.

Meanwhile, smartphone screens are in some instances expanding so greatly that they are inhabiting territory occupied by smaller tablets. The Samsung Galaxy Note II, more phone than tablet despite the use of a souped-up S Pen stylus, has been nicknamed a "phablet."

Prices run big and small, too, with most new tablets from well-known companies starting at roughly $200. Knockoffs from companies you've never heard of cost even less, though they usually aren't as snappy, and their screens don't quite measure up.

At the other extreme, the current top of the line iPad commands $829, in addition to monthly fees you might incur for fast cellular data service.

Serving laptop masters

Tablets, of course, serve many masters. You use them to browse the Web, watch high-definition movies and TV shows, play games, chat over video, shoot pictures, catch up on e-mail, read books and periodicals, and otherwise entertain, educate and in some cases let you get work done.

The iPad excels in all of these areas. But rivals are making inroads and producing strong alternatives that are well worth considering, depending on how you're most likely to employ them:

• Tablets for readers. If reading is your passion, you have solid reasons to stick with dedicated eReaders, such as the various monochrome Kindle and Nook models from Amazon and Barnes & Noble, respectively, which some folks don't even consider to be tablets.

Electronic readers lack the color pizzazz of their multimedia siblings, but they're comparatively inexpensive, provide superb battery life and terrific glare-free screens, and they let you tap into enormous virtual bookstores.

Still, there's a reason that Amazon and Barnes & Noble are aggressively pushing the more versatile color tablets in their lineups. In many respects, the 7-inch $199 Kindle Fire HD, for example, is your entrée into Amazon's vast digital storefront. Collectively, Amazon offers more than 22 million movies, TV shows, songs, magazines and apps. On certain audio books from the company's Audible service, you can exploit a feature called Immersion Reading, in which text on the screen is highlighted while you hear professional narration. Another feature called Whispersync for Voice lets you read a Kindle book on the Fire and pick up where you left off in the corresponding audio book.

Amazon describes its X-Ray for Books feature in the Fire as a way to "explore the bones of a book." It helps you find all mentions of characters, places and terms used in a book. A similar X-Ray for Movies feature lets you peek at information about the actors in a scene you are watching, culled from the Amazon-owned IMDb service.

Notwithstanding all the other things they can do, smaller-screen tablets such as Kindle Fire HD and Nook HD or HD+ generally work better for heavy-duty book readers than their large-screen brethren. It's also one of the reasons the iPad Mini ($329 on up) is arguably better for reading what Apple refers to as iBooks, than the full-size iPads — you don't mind that the Mini display, while perfectly fine, isn't as sweet as the full-size iPad's Retina display. Google's Nexus 7 ($199 on up) is also an excellent eBook reader. What's more, it's more than an eReader: It's a budget tablet that performs like a pricey machine.

It's worth pointing out, though, that Amazon, Barnes & Noble, Google and others make free apps available that will let you read their eBooks not only on those companies' own devices but on the iPad and other tablets as well.

• Tablets for business. A 2011 Forrester study found that of 10,000 information workers in 17 countries, 24% of workers in small businesses use a touch-screen tablet for work. While the iPad is primarily known as a play device, Apple has been pushing the tablet's business virtues. Apps can turn the iPad into a hub for capturing digital signatures, telephony and communications, and computer-aided design (among numerous other productive purposes).

But a chief drawback to using the iPad for work, and for that matter most touchscreen tablets, is the lack of a physical keyboard.

That's where many of the Windows 8 convertibles are trying to gain a competitive edge. They double as traditional laptops reliant on keyboards and mice, as well as touch tablets that rely on your fingers as the pointing device. The Lenovo Yoga 13, for example, is a $999 Windows 8 hybrid that via a 360-degree hinge can be contorted into four distinct modes: as laptop, stand, tent and yes, tablet. For all its PC-type virtues, which include an excellent screen and solid keyboard, however, it is a bit more clumsy to use in tablet mode.

Microsoft's magnesium Surface, starting at $499, is impressive hardware. It represents the first time Microsoft has built its own personal computer. Anyone wanting to use it for work, though, should seriously consider spending the extra $100 or so for a clever keyboard cover accessory that provides a very usable keyboard. It nicely complements the Windows 8 touch screen.

The confusing element here comes with the Windows 8 software. The first Surface tablets run a Windows 8 variant known as Windows RT, which relies on ARM processors and promises decent battery life. Surface RT comes with multitouch versions of Word, Excel and PowerPoint, but is overall short on apps. Worse, it can't run any of the legacy Windows software you've been using forever.

The Windows 8 Pro version of Surface that is coming in January will run those older programs, and it relies on generally more robust Intel processors. But it's heavier (2 pounds vs. 1.5 pounds) and, at $899, costs a lot more.

Meanwhile no matter which way you go, expect a learning curve as you get comfortable with the live interactive tiles that decorate the Windows 8 Start screen, a major departure from the Windows you grew up with. The traditional Start menu has gone AWOL, though some screens in this new Windows environment will look somewhat familiar under certain circumstances.

• Kids. There's a swell chance Junior wants to use your iPad, if not get one of his own — it's loaded with appealing games and apps for kids. Still, parents might instead want to consider tablets that were especially produced for the youngest members of the household. And a slew of companies are jumping into that playground, including venerable toymakers such as LeapFrog, Fisher-Price and Toys R Us, which produces the $150 Tabeo.

The $200 Nabi 2 tablet from Fuhu is representative of the category. It's a 7-inch, 1.3-pound Android tablet running off an NVIDIA quad-core processor. Housed in a red protective border, the tablet is preloaded with kids' music and games, and promises educational lessons tailored for K through 5 students. A Chore List app lets parents create tasks for their children ("be nice," "make new friends") and assign them rewards. Open the Web browser and you find buttons that take you to sites such as National Geographic Little Kids, Crayola Kids and Disney Fairies. Sites for grown-ups are kept off-limits. For $2.99 a month, you can stream kid-safe TV to the tablet.

The $150 Oregon Scientific Meep, also a 7-inch Android kids tablet, comes with a protective orange silicone bumper. It is targeted at 6-year-olds on up. Art and learning apps are preloaded, along with Angry Birds. Parents can tap into a cloud-based parental control portal without having to snatch the tablet away from their kids.

While a kid-specific tablet may be just what Mom and Dad ordered, Nooks, Kindles and iPads also come with built-in parental controls.

The iPad is clearly still the tablet to beat. But consumers have an increasing number of worthwhile tablets to choose from, with all sorts of sizes, prices and features.


How To Lock down Linux by Steven J. Vaughan-Nichols -

Linux is, by design, a very secure operating system, but so what? You can have the best security system in the world on your house, but if you leave your front door open anyone can still walk in. Even people who know better, like Linux kernel developers, blow it sometimes. That’s what happened to the Linux Foundation’s constellation of sites. Multiple important Linux sites were down for weeks, and one, as of October 3rd, was still down. This doesn’t have to happen to you. Here are a few simple suggestions from me, and some more advanced ones from Greg Kroah-Hartman, one of Linux’s lead developers.

First, here are some rules that everyone should know. Number one with a bullet is security expert Bruce Schneier’s mantra, “Security is a process, not a product.” I don’t care if your server was Fort Knox two weeks ago; if you haven’t updated your system with the latest security patches, checked to make sure your users haven’t started running a porn Web server, and looked over your network logs to see whether someone or something is up to mischief, then you can’t trust your system today.

In addition, as Kroah-Hartman wrote, “it is imperative that nobody falls victim to the belief that it cannot happen to them. We all need to check our systems for intrusions.” And, I might add, we need to keep doing it all the time.

Therefore, make darn sure that your root password, which should really be a passphrase, not a password, isn’t being used by anyone other than you. If your users really need fuller access than they usually get to the system, provide them with sudo access.
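Sudo access is granted in /etc/sudoers. A minimal sketch, with "alice" standing in as a hypothetical user (always edit the file with `visudo`, which syntax-checks your changes before saving):

```
# Give alice full sudo rights -- she types her own password, not root's:
alice   ALL=(ALL:ALL) ALL

# Or, better, limit her to the one command she actually needs:
alice   ALL=(root) /usr/sbin/service ssh restart
```

The second form is the least-privilege version: alice can restart one service as root and nothing else.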

Thinking of users: lock them down. Give them only as much permission and access as they absolutely must have. If it turns out they need access to, say, a group file directory, give it to them after they’ve shown a need for it, not before. While you’re at it, set their home directories to be encrypted.
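Permission lock-down is mostly a matter of chmod and ownership. A tiny, self-contained illustration (the directory name is a throwaway placeholder):

```shell
# A home directory only its owner can enter, read, or modify:
mkdir -p demo_home
chmod 700 demo_home
stat -c '%a' demo_home    # prints: 700
rmdir demo_home
```

Mode 700 means read/write/execute for the owner and nothing for group or world. For encrypted homes, some distributions offer this at user-creation time (for example, `adduser --encrypt-home` on older Ubuntu releases with ecryptfs-utils installed).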

Moving on to the network, every system connected to the Internet needs a firewall set up to, once again, give users the absolute minimum of needed access. If someone doesn’t need to use a network port, that port should be blocked. Period. End of statement.
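As a sketch of what "minimum needed access" looks like as firewall configuration, here is a deny-by-default rules file in iptables-restore format. The path follows the iptables-persistent convention, and port 22 stands in for whichever service you actually expose:

```
# /etc/iptables/rules.v4 -- deny-by-default sketch (load with iptables-restore)
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Keep already-established connections working:
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow loopback traffic and the one service you actually need (SSH here):
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

Everything not explicitly allowed hits the INPUT chain's DROP policy, which is exactly the "if a port isn't needed, it's blocked" rule stated above.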

That’s all security 101 stuff. Kroah-Hartman gets into more technical detail. Still, what he suggests doesn’t require you to be some kind of security ninja. You just need to know and practice some Linux administration basics.

For starters, if you have any suspicion that your system has been compromised, Kroah-Hartman suggests a clean install of your operating system. If you have everyone’s home directories in a separate home partition, which you should, you can reinstall your operating system during an idle period and no one will even be the wiser that everything has been refreshed.

After that, Kroah-Hartman suggests that you “verify that your package signatures match what your package manager thinks they are. To do this on a rpm-based system, [such as Red Hat or openSUSE] run the following command:

rpm --verify --all

“Please read the rpm man page for information on how to interpret the output of this command.” On Debian-based systems, such as Mint or Ubuntu, it’s more complicated. From a Bash shell you need to run the following:

dpkg -l \* | while read s n rest; do
    if [ "$s" == "ii" ]; then echo $n; fi
done > ~/tmp.txt

for f in `cat ~/tmp.txt`; do debsums -s -a $f; done

Let’s say you find a program that smells funny; you’ll want to get rid of it and install a fresh version. First stop the program from running. For example, to stop the Secure Shell (ssh) daemon before replacing it, run the following command:

$ /etc/init.d/sshd stop

and then re-install the suspect program.

You should also get into the habit of not just glancing over your startup scripts and system logs from inside your operating system – you are already doing that, right? Right!? – but taking your system down, rebooting it with a live CD Linux distribution, and checking for rogue start-up scripts and odd log entries. For this kind of work, I prefer to use a Linux distribution like SystemRescueCd, which is designed for repair work, to look a system over for problems. You can use any live CD distribution, though, and if you’re happy with your main Linux, there’s no reason not to, say, use a live Ubuntu USB stick or CD to look over an Ubuntu system.
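The same verify-against-a-known-good-copy idea works with plain checksums. A minimal sketch with throwaway filenames:

```shell
# Record a checksum while the system is known-good...
printf 'known-good contents\n' > demo.conf
md5sum demo.conf > demo.conf.md5

# ...then later (ideally booted from a live CD) verify the file is unchanged:
md5sum -c demo.conf.md5   # prints: demo.conf: OK

rm demo.conf demo.conf.md5
```

In practice you would checksum system binaries and startup scripts, store the checksum file offline, and compare from the live CD; tools like debsums and rpm --verify automate exactly this comparison against the package database.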

If you do all this, well, you can still be cracked if an expert is targeting you, but you’ll be a lot safer from run-of-the-mill crackers and their automated programs. Good luck and stay safe out there. Even for Linux users, it’s a dangerous old Internet.

How To Use CKEDITOR with IMCE for Drupal


CKEditor is an online rich text editor that can be embedded inside web pages. It is a WYSIWYG (What You See Is What You Get) editor, which means that the text edited in it looks as similar as possible to the results end users will see after the document is published. It brings to the Web popular editing features found in desktop word processors such as Microsoft Word and Writer.

All you need, in a unique solution

CKEditor provides all features and benefits users and developers expect from modern web applications. Its amazing performance keeps users focused on things to be done. CKEditor includes all popular features available in text editors as well as additional components specially designed for web contents.

Fully compatible

One of the strongest features of CKEditor is its almost unlimited compatibility. It's a JavaScript application, so it simply works with all server technologies, just like a basic textarea element. As far as the client side is concerned, CKEditor has been developed to be compatible with all browsers that dominate the market, namely Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera. It will even work in the infamous Internet Explorer 6!

It's simply the best

We are not claiming that CKEditor is the best editor there is. Our customers and community are saying that, loudly. People enjoy working with the editor, fully trusting it, backed up by our strong and longtime market presence. Visit the CKEditor web site to know more about it, play with the demo and enjoy it!



CKEDITOR Users Guide

Adding Images From Your Desktop

The first step is to go to the CKEDITOR toolbar and click on the image icon 

The following dialog box, titled "Image Properties", will appear:

Click the blue "Browse Server" button, which will lead you to the next dialog box:

Click "Upload" and the "File" dialog box will appear:

Click "Browse" and find the desired image file on your computer

If you wish to create a thumbnail, click the desired checkbox, then click Upload.

The previous dialog box will appear again, but this time your uploaded image files will be listed and highlighted, as follows:

Double Click the desired image file. The "Image Properties" dialog box will appear again, but this time it will contain information about the chosen image file:

The selected image can be resized simply by changing the Width or Height. If you change one, the value for the other will change so that the image is proportional. 

You can place a value in Border, HSpace, and VSpace to add padding.

Alignment allows for placement of the image to the right or left of the text. If you wish to center the image, simply leave "Alignment" as <not set>, then use "Center" from the tool bar.



CKEDITOR Quick Reference

Below you will find an overview of all features available in the default CKEditor toolbar.

Working with a Document

Toolbar Button Description
Source View or edit the source code of the document (for advanced users). See Document Source.
Save Save the contents of CKEditor and submit its data to the server, when CKEditor is placed inside an HTML form. See Saving Content.
New Page Clear the editing area and create a new page. See Creating a New Page.
Preview Show a preview of the document in the shape that will be displayed to end users. See Document Preview.
Templates Select a layout template. See Templates.
Cut Cut the selected text fragment to the clipboard. See Cut.
Copy Copy the selected text fragment to the clipboard. See Copy.
Paste Paste content copied to the clipboard along with formatting. See Paste.
Paste as plain text Paste content copied to the clipboard without formatting. See Paste as Plain Text.
Paste from Word Paste content copied from Microsoft Word or similar applications along with formatting. See Paste from Word.
Print Print document contents. See Printing.
Insert Page Break for Printing Insert a page break. This only impacts the printed version. See Page Breaks.
Check Spelling Spell Check As You Type Check spelling of the document text or turn on the Spell Check As You Type (SCAYT) feature. See Spell Checking.
Undo Redo Undo or redo the most recent action performed. See Undo and Redo.
Find Find a word or phrase in the document. See Find.
Replace Find and replace a word or phrase in the document. See Replace.
Select All Select all contents of the document. See Text Selection.
Remove Format Remove the formatting of the selected text. See Remove Format.
Maximize Maximize the editor in the browser window. See Resizing and Maximizing CKEditor.
Show Blocks Highlight all block-level elements in the document. See Show Blocks.
About CKEditor Show information about CKEditor. See CKEditor Version.

Text Styling

Toolbar Button Description
Bold Italic Underline Strike Through Apply bold, italic, underline or strike-through formatting to the text. See Bold, Italic, Underline, and Strike-through.
Subscript Superscript Apply superscript or subscript formatting to the text. See Subscript and Superscript.
Formatting Styles Apply pre-defined combinations of various formatting options to block and inline elements. See Formatting Styles.
Paragraph Format Apply pre-defined block-level combinations of various formatting options. See Paragraph Format.
Font Name Change the typeface of the text. See Font Name.
Font Size Change the font size of the text. See Font Size.
Text Color Change the color of the text. See Text Color.
Background Color Change the background color of the text. See Background Color.


Text Layout

Toolbar Button Description
Decrease Indent Increase Indent Increase or decrease text indentation. See Text Indentation.
Block Quote Format a block of text as indented quotation. See Block Quote.
Create Div Container Create a new div element in the document source. See Creating Div Container.
Align Left Center Align Right Justify‎ Set text alignment (left, centered, right or justified). See Text Alignment.
Text direction from left to right Text direction from right to left Set text direction as from left to right (default value for most Western languages) or from right to left (languages like Arabic, Persian, Hebrew).
Insert Horizontal Line Insert a divider line (horizontal rule) into the document. See Horizontal Line.

Rich Text

Toolbar Button Description
Insert/Remove Numbered List Insert/Remove Bulleted List Create a numbered or bulleted list. See Creating Lists.
Link Unlink Create or remove a hyperlink in the text. These features may also be used to manage file uploads and links to files on the web server. See Links, E-Mails and Anchors.
Anchor Insert a link anchor to the text. See Anchors.
Image Insert an image into the document. See Inserting Images.
Flash Insert an Adobe Flash object into the document. See Inserting Flash.
Table Create a table with the defined number of columns and rows. See Creating Tables.
Smiley Insert an emoticon image (smiley or icon). See Inserting Smileys.
Insert Special Character Insert a special character or symbol. See Inserting Special Characters.
IFrame Insert an inline frame (iframe). See Inserting IFrames.

Form Elements

Toolbar Button Description
Form Insert a new form into the document. See Creating Forms.
Checkbox Insert a checkbox into the document form. See Checkbox.
Radio Button Insert a radio button into the document form. See Radio Button.
Text Field Insert a text field into the document form. See Text Field.
Textarea Insert a multi-line text area into the document form. See Textarea.
Selection Field Insert a selection field into the document form. See Selection Field.
Button Insert a button into the document form. See Button.
Image Button Insert an image button into the document form. See Image Button.
Hidden Field Insert a hidden field into the document form. See Hidden Field.


How Windows XP end of support sparked one organisation to shift from Microsoft

The withdrawal of support for XP helped one organization decide its best option was a move away from Microsoft Windows as its main operating system.

( @ zdnet) There are the XP diehards, and the Windows 7 and 8 migrators. But in a world facing up to the end of Windows XP support, one UK organization belongs to another significant group — those breaking with Microsoft as their principal OS provider.

Microsoft's end of routine security patching and software updates on 8 April helped push the London borough of Barking and Dagenham to a decision it might otherwise not have taken over the fate of its 3,500 Windows XP desktops and 800 laptops.

"They were beginning to creak but they would have gone on for a while. It's fair to say if XP wasn't going out of life, we probably wouldn't be doing this now," Barking and Dagenham general manager IT Sheyne Lucock said.

Around one-eighth of corporate Windows XP users are moving away from Microsoft, according to recent Tech Pro Research findings.

Lucock said it had become clear that the local authority was locked into a regular Windows operating system refresh cycle that it could no longer afford.

"If we just replaced all the Windows desktops with newer versions running a newer version of Windows, four years later we would have to do the same again and so on," he said.

"So there was an inclination to try and do something different — especially as we know that with all the budget challenges that local government is going to be faced with, we're going to have to halve the cost of our ICT service over the next five years."

Barking and Dagenham outsourced its IT in December 2010 to Elevate East London, which is a joint venture between the council and services firm Agilisys.

How to setup a multilingual website with Drupal 7

Setting up a basic Drupal website in English is relatively easy. Setting up a multilingual website isn't as obvious as you would hope it to be, and knowing a thing or two about how and where to find help is a must. See References for more information.

The present article does not address the topic of multilingual menus.

Alright, let's do it.

  1. Starting with a fresh Drupal 7 install is the best way to avoid problems. That said, it is possible to transform a unilingual website into a multilingual one. Let's just say that it is beyond the scope of the current article.

  2. Activate the Locale and Content translation modules. Both come installed with Drupal 7.

  3. Download, install and activate the i18n and Variable modules (and all their submodules). The Variable module is new and required by i18n in D7. It provides a simple interface where you can designate system variables as Multilingual variables. In D6, you have to do it by hand in the settings.php file (See References block). More on the usefulness of the Variable module a little later.

  4. Go to the languages interface (admin/config/regional/language) and add a new language to the list. In this article, I'll be adding French.

  5. Now, add a Path prefix language code to each language. You can do this by clicking the language edit link (admin/config/regional/language/edit/en). In the current example, I've added "en" for English and "fr" for French. Note the warning: Modifying this value may break existing [node] URLs. I had created a couple of nodes prior to making this change and ran into many nagging problems with bad links; deleting and recreating all existing nodes solved them. If you have many existing nodes, export them before deleting them, then import them back. You can do this more easily with the Node export module. Also, make sure the option "Determine the language from the URL (Path prefix or domain)" is enabled at admin/config/regional/language/configure.

  6. Once this is done, go to the Drupal translation page and download the translation package for the language you just added; in this article, French for Drupal core 7.x. Look for the Download link that appears to the right of Drupal core 7.x. If you hover over this link, you will see that it points to the translation file. Download this file to your desktop.

  7. Go to admin/config/regional/translate and click on the Import tab. Import the translation package you've just downloaded into the desired language (French for this article). You may need to import other project (module) packages (e.g. Views, Panels, etc.), but you can do that later.

  8. Go back to the Overview tab. Note the higher percentage for translated strings for the language you added.

  9. Next, we will activate the Language switcher block, which allows users to switch between languages. Go to admin/structure/block, look for Language switcher, and set the region (Sidebar first in the present case). Click the Save blocks button at the bottom of the page. Go to your Home page and check that the block is showing. Clicking the different languages will switch the interface back and forth between them.

  10. The last step is to enable Multilingual support, with translation, for each of the content types that you require. For the Article content type, go to admin/structure/types/manage/article, click on the Publishing options tab (the horizontal ones), and activate the Enabled, with translation radio button. Then click the Save content type button. Repeat for all content types that need a language-specific translation.

That's it. Your website is multilingual ready. Now every time you create a new node (Article, Page, etc.), you'll have to specify which language it belongs to. Language neutral nodes will be displayed just the same for all languages. That's why this article appears in English as well as in French. Check it out.
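For those who prefer the command line, the module setup in steps 2 and 3 can be sketched with Drush. This is a hedged sketch, not part of the original recipe: it assumes Drush is installed, that you run it from the Drupal 7 site root, and that the standard D7 machine names (locale, translation, i18n, variable) apply; verify against your setup before running. Wrapping the commands in a function lets you source the file and review before executing.

```shell
# Sketch of steps 2-3 with Drush (assumption: Drush is installed and
# you are in the Drupal 7 site root; module machine names are the
# standard ones for the packages named in this article).
setup_multilingual() {
  drush en -y locale translation   # step 2: core Locale + Content translation
  drush dl -y i18n variable        # step 3: download i18n and Variable
  drush en -y i18n variable        # step 3: enable them
}
```

The language, path-prefix, and translation-import steps (4 through 10) still go through the admin UI as described above.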

Inkscape Logo Maker

Picture of 10 Steps to your own logo in Inkscape (free) and using Creative Market for dummies like me


Sorry for the long title. I have a few confessions too. First, my drawing and designing skills are not even the equivalent of a finger painter's. Second, I don't make money, so I always have to make things that could otherwise be bought, including a website logo graphic. Third, I don't have professional illustrator software or an input tablet, not even a mouse; just a laptop with a touch pad (I don't even know if that's the right technical name). That brings up my fourth confession: I'm almost computer illiterate. I get things done on a computer if I need to, such as writing Instructables and uploading photos and YouTube videos, but I don't understand a thing about how computers work, I keep forgetting how to upload YouTube videos, and so on.

With that said, if you're a graphic designer and I haven't lost you yet, maybe checking out my cooking Instructables is a better use of your time (apart, perhaps, from voting or not voting for this Instructable in the Graphic Design Contest). If you're a dummy and a cheapskate like me, though, I intend to show you what I have learned very recently: how to design a logo (text, graphic, or text and graphic) in Inkscape (free), using Creative Market.

I'm teaching myself Inkscape with the book: Inkscape: Guide to a Vector Drawing Program (4th Edition).

I checked out Creative Market (I highly recommend it). If not for this contest, I wouldn't have known it existed. I signed up for its blog (again, highly recommended). I downloaded a free font called Baystyle typeface by Ekloff and used it in the cover photo of this Instructable ("A logo" is in the Baystyle typeface). It's the first time in the digital age that I've downloaded a typeface to my computer. I can't believe it worked.

Without further delay, here are the things I hope you'll quickly learn from this Instructable:
1. Download Inkscape for the right system bit of your computer.

2. Start Inkscape.

3. Set the drawing size.
4. Set up a Grid to guide drawing objects.

5. Edit text: font, size, color, letter/word horizontal and vertical spacing, letter/word orientation and rotation.

6. Import and trace image to use in your design.

7. Edit path (drawing).

8. Put text on path.
9. Flow text in a shape.

10. Save and export your design.

You can either follow each step exactly, or be brave and follow my steps while using your own ideas; before you know it, you may have designed your dream logo. If I can do it, you can do it.


Step 1: Download Inkscape

Picture of Download Inkscape

1.1 Find out the system type of your computer: click Start, right-click Computer, and click Properties. In the Properties window, in the System section, after System type, it says either 64-bit operating system or 32-bit operating system.

1.2 Download Inkscape for your computer's system type: go to the Inkscape download page and click 64bit: installer (msi) if your computer runs 64-bit Windows. Click Save to download Inkscape to your computer, double-click the installer when the download is done, and follow the installation wizard to complete the installation.


Step 2: Start Inkscape

Picture of Start Inkscape

Double click Inkscape on desktop or click Inkscape in Start to start it.

Tip: It may take a few seconds for anything to happen. Recently I downloaded some stuff to my computer and Inkscape started taking a very long time to start; after I removed it, the startup time shortened significantly.

A quick glance at the Inkscape window; the places that are used frequently are:

On the top: menu bar, command bar, tool controls (specific to which tool is in use)

On the left: tool bar (the arrows at the bottom mean there are more tools than space available to display them)

On the right: snap bar

Center: Page (useful for setting an output region)

Bottom: Color Palette/swatches, (Fill and stroke) Style indicator, Notification region (very useful), Zoom (helpful to know)


Step 3: Set the drawing size

Picture of Set the drawing size

Logo sizes vary. What is the right logo size? Here is a visual reference of logo sizes:

For a text-only logo with "How 2 design a logo" on it, I think 400 by 100 seems fine. To do that in Inkscape, click File, then Document Properties. In the Document Properties window/dialog, under the Page tab, in the Custom size section, set Units to px, place the cursor at the end of the Width and Height entries, use Backspace to delete the default values, and enter 400 for Width and 100 for Height, pressing Enter after each entry. Inkscape automatically adds decimal points after your manual entry. Make sure that in the Display section, Show page border is checked.

Notice the page size changing to your settings? If your page size seems strangely small or large, the Zoom (lower right corner) may be set small or large. Place the cursor at the end of the zoom entry box, use Backspace to delete the default zoom level, enter 100 or whatever zoom level you want, and press Enter to make your entry take effect, as shown in the second photo.


Step 4: Set up a Grid to guide drawing objects

Picture of  Set up a Grid to guide drawing objects

In the still-open Document Properties window/dialog (if you closed it, go to File, then Document Properties to open it again), under the Grids tab, in the Creation section, choose Rectangular grid and click New. Make sure Enabled, Visible, and Snap to visible grid lines only are checked.


Step 5: Edit text

Click the Text Tool in the tool bar (left side, with the letter A), click anywhere in the page, and type "HOW" or whatever you want (1st picture).

To change the text font, click the arrow to the right of the font box, then press the down arrow key on your keyboard to go through the fonts available on your computer and watch the font change. Stop at the font you like, or keep pressing the arrow key to go over the fonts again and again. I stopped at serif (2nd picture).

To change text size, click on the arrow to the right of the size box, then examine the current grids the text occupies and choose the new size for the text. I chose 36 (3rd picture).

To change text direction and rotation (any degree), first select the text by clicking in front of it and dragging over it, just as you would select text in other programs (4th picture); then click vertical text to set the text vertical, and enter a rotation value to rotate it (5th picture). I set vertical text and then rotated 90 degrees.

Next, click the select and transform objects tool to select the text as an object so you can move it, scale it, etc. (6th picture). When the mouse is over the selected text object and you see the move cursor (a cross with four arrows), you can click and drag to move the text object (7th picture). I moved the text object so the letter H is at the beginning of the page.

To reduce the spacing before "O" and "W", click the Text Tool, select the text, and set the letter spacing and vertical shift to negative values to pull the 3 letters together (8th picture). You may need to try a few values to make it look just right. If you don't like what you did, you can always undo (click Edit in the menu bar, then choose to undo previous steps).

Repeat the above steps to enter and edit the remaining text elements. To change the font color, while the text is selected, clicking any color swatch will do (9th picture).

You can set a different font for individual letters/words, rotate letters/words, change horizontal and vertical spacing, etc. (the letter "i" in the word "design" was set to the Jokerman font; the word "a" was rotated 60 degrees and then its vertical spacing adjusted) (10th picture).

If a text-only logo is what you want and you're done with your logo design, you can go to step 10 of this Instructable for reference on saving and exporting it.


Step 6: Import and trace image to use in your design

By now I can draw a little in Inkscape, but it's faster to use a photo for demonstration, and sometimes you do want to use an image in your design. To do that, follow the previous steps 1-4 and set the page to 256 by 256 px. Click File, then Import (fig. 6.1), select the file to import, and click Open (fig. 6.2).

In the Import window/dialog, check "link", "From file", and "None". If you place the mouse over the options, an explanation of each option is displayed, so you can get a sense of what each one means (fig. 6.3). Then click Open. Fig 6.4 shows how it may look after you open the file to import; it is automatically selected.

To scale and move the image onto the page: place the mouse over one of the corner arrows until it turns green, then click and drag to resize. Place the mouse over the image until the pointer changes to a cross with four arrows, then click and drag to move the image onto the page. Pay attention to the notification region, which offers guidance on what to do (Fig. 6.5).

Next is when the magic happens. With the image selected (a must for the magic to happen), click Path in the Menu bar, then Trace Bitmap to bring up the dialog. In the dialog, check Live Preview; in the Mode tab, check Brightness cutoff for this image and play around with the threshold to get the best traced image. Check Stack scans and Remove background in the Mode tab. Finally, click OK. Again, once the mouse is over each option category, an explanation of the option is displayed, and you can undo any previous step (Fig. 6.6). The book explains in detail the difference each option makes.

After that, you can close the Trace Bitmap dialog/window. Notice the newly generated path object is selected and the notification region says Path of 37 nodes (yours could be a different number, depending on the image and tracing options) (Fig. 6.7).

Click and drag to move the newly generated path object to reveal the original image (Fig. 6.8).

Select and delete the original image, since it's no longer needed for this project. Select and move the path object to the center of the page. Now it's ready for manipulation (Fig. 6.9).


Step 7: Edit path (drawing)

With the path object selected, click on any color swatch to change its color (Fig. 7.1).

With the path object selected, click Path, then Break apart (Fig. 7.2). Notice it breaks into two paths; for a computer illiterate like me, it feels like magic. Now we can manipulate just one of the paths.

For this project, click anywhere outside the page area to deselect both, then click the top path to select it (Fig. 7.3). Click it again and the selection shows a cross in the center (the rotation point) (Fig. 7.4). Click and drag to move the rotation center to the left corner of the top (Fig. 7.5).

Place the mouse over the top right corner arrow until it turns green (Fig. 7.6). Click and drag the arrow upward to tilt the top. You can also click and drag the path to move it down and to the left a little after tilting (Fig. 7.7).

Now, to add hearts to the cap, click anywhere outside the page to deselect the path, click the Circle tool, click and drag anywhere on the canvas to draw an ellipse, and change its color to orange-red. Right-click the number in the stroke indicator (located at the bottom left corner; mine is 2, yours may be different) (Fig. 7.8). Click Remove to remove the outline of the ellipse; notice the stroke indicator now says none. While the ellipse is selected, click Duplicate in the Command bar to duplicate it (it's on top of the original, so you see only one), then click the select tool and nudge the top one to the right (Fig. 7.9).

Using skills learned in previous steps, tilt the two ellipses to form a heart shape. Select both (click to select one, then Shift+Click to select the other), click Path, then Union to form one heart (Fig. 7.10).

Using skills learned in previous steps, duplicate two more hearts, scale them to different sizes, and move them to the cap. Select the big heart, click Object, and then Lower to bottom to place the heart in the cap (Fig. 7.11).

After that, if that's the design you want, you can go to step 10 for reference to save and export your design.


Step 8: Put text on path

To create something like Fig. 8.1, you create a path, enter text, put the text on the path, and then remove the path's stroke.

To create a path curved like the bottom of the cap, it's easiest to duplicate the bottom's shape. So select the bottom path, duplicate it, then click Object in the Command bar, then Transform, to precisely control the object's transformation (Fig. 8.2).

In the Transform window/dialog, under Move tab, enter -100 px for vertical, make sure Relative move is checked, then Apply to move the duplicated path down 100 px to be manipulated (Fig.8.3).

Click the Box tool, click and drag to draw a rectangle over the top of the moved-down path, and then select both (Fig. 8.4).

Next, the magic happens again. Click Path, and then Difference, to remove the top part of the path (Fig. 8.5).

Next, click Fill and Stroke in the Command bar. In the dialog/window, under the Fill tab, click the X, which will make the moon shape seemingly disappear. Don't panic: next click the Stroke paint tab, and then the solid-fill square under it, which will make the outline of the moon shape appear (Fig. 8.6).

Now you can close the Fill and Stroke window/dialog. Click the Edit paths by nodes tool, click to select one node at the tip of the moon shape, then Shift-click to also select the node at the other tip (Fig. 8.7).

Next, click Delete segment between two non-endpoint nodes in the Tool Controls bar, which turns the closed path into an open path (Fig. 8.8).

Now, using skills learned in previous steps, type the text ("The Wow Homemaker", for example), set its color to orange-red, and set the font size close to the size of the curve (I set it at 14). Select the text object and the open path (curve) at the same time, click Text, and then Put on Path (Fig. 8.9).

Use skills learned in previous steps to adjust the spacing among letters and/or words; use Shift+D and click to select the path, and stretch it to accommodate all the text. Then, with the path selected, right-click the stroke width number and click Remove (Fig. 8.10), which makes the curve disappear while the text retains its curved shape.

Now select and move the text up the old-fashioned way, or use Transform to move it up precisely (Fig. 8.11).

If you're done with your logo design, you can go to step 10 for reference to save and export your design.


Step 9: Flow text in a shape

Picture of Flow text in a shape

To flow text into a shape and have the text fill it, I learned that it's better to know the dimensions of the shape, to estimate how much text it can hold. Select the shape, and the Tool Controls bar shows the width and height of the selection (Fig. 9.1).

Use skills learned in previous steps to remove the fill and add a stroke for the paths. Click the Text Tool, click and drag a box of approximately the size of the shape anywhere on the canvas, type the text in the box, and change the font size and color if necessary. Select the text object, the bottom path, and then the top path (when text is flowed into multiple shapes, it flows first into the last-selected shape), click Text, and then Flow into Frame (Fig. 9.2).

With the Text Tool selected, you can click in the text and then adjust spacing, type more text or delete to have a better distribution of the text in the frame (Fig.9.3).

Click anywhere outside the drawing to exit the Text Tool. Click the Select Tool, then click and drag a box that includes everything in the drawing (the notification region should say how many objects are selected). Click Object, then Group, to group the objects into one group. This is the last thing I do before saving and exporting the file and calling it a day (9.4).


Step 10: Save and export your design

Picture of Save and export your design

It's always a good idea to save your design in SVG format in Inkscape so that, if you need to edit it later, you can pick up from an advanced point rather than from scratch. Click File, then Save as; in the pop-up window, select the folder to save in, give it a file name, and click Save (Fig. 10.1).

It's also important to export your design for use in other programs or on the web, which don't always support SVG files. To export your design, select it, click File, and then Export PNG Image. In the dialog, choose Selection under Export area (Page and Drawing may include the space around the design, I think; you can play around with the options to see). Make sure, under Filename to the left of Export As, that the folder and file name are the ones you want, and then click Export (10.2).

Voila! You just designed your own logo, from scratch to complete!

I hope this Instructable helps you learn the basic techniques of using Inkscape to design a graphic in a short time. If it is helpful, please vote for it in the Graphic Design Contest.

Install PhotoScape 3.6 on Linux


(M Riza @ Oa Ultimate) PhotoScape 3.6 is a simple and easy free photo editor that only runs on Windows. Through Wine, we can run PhotoScape on a Linux OS, in my case Ubuntu 11.10. I installed PhotoScape on my Ubuntu 11.10 with no problem, but it took me two days to make it run.

PhotoScape 3.6 needs the Visual C++ 2008 libraries to run, and the Visual C++ 2008 libraries need access to the MSI service in order to install; because of those libraries, I had to install and reinstall Wine several times. In short, several libraries need to be downloaded and installed into Wine using winetricks in order to run PhotoScape on Linux, and several settings need to be configured in the Wine configuration window.




So after messing around with Wine, I finally got PhotoScape running on my Ubuntu machine and started editing photos.

Photoscape is a fun and easy photo editing software that enables you to fix and enhance photos (photoscape).

In case anyone has the same problem as me, I wrote up the following steps, which could help you out.

Before walking through the detailed steps of Wine configuration for PhotoScape, first make sure you have Wine installed on your Ubuntu. Otherwise, install it with the Synaptic Package Manager, or directly with apt-get in a console window:

me@pc:~$sudo apt-get update
me@pc:~$sudo apt-get install wine

To install PhotoScape on Ubuntu 11.10, download the setup file from the official website and double-click it to start the PhotoScape installation.

Assuming you already have Wine and PhotoScape installed on your Ubuntu:

1. Install MSI2 using winetricks:

me@pc:~$winetricks msi2

2. Run winecfg to open the Wine configuration window:

me@pc:~$winecfg

3. Go to the Libraries tab and look for MSI in the Existing overrides list.

4. Click Edit to edit its Load Order, and change it to “Builtin then Native”.

photoscape installation - ubuntu 11.10

UPDATE (read only permission error)

Many people have a problem where PhotoScape cannot save images: a permission error is shown whenever they try to save an image. The solution is to install native gdiplus and set the library Load Order for gdiplus to “Native then Builtin” (details are in steps 6-8; tested on Ubuntu 11.10 and Linux Mint Release 13 (Maya) 64-bit).

6. Install native gdiplus with winetricks:

me@pc:~$winetricks gdiplus

7. Open the Wine configuration window (see step 2), click the Libraries tab, find gdiplus in the New override for library drop-down menu, and click Add.

GDIPLUS winecfg - add to override

8. Click gdiplus in Existing overrides and click Edit. Next, set the gdiplus Load Order to “Native then Builtin”.

set load order to Native then Builtin

9. Now, install the Visual C++ 2008 libraries with winetricks:

me@pc:~$winetricks vcrun2008

10. Run PhotoScape and have fun.
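The command-line portion of the recipe above can be gathered into one function, a sketch only: the winetricks verb names (msi2, gdiplus, vcrun2008) are taken from the steps above, and the winecfg load-order changes (MSI to “Builtin then Native”, gdiplus to “Native then Builtin”) still have to be made by hand in the Libraries tab.

```shell
# Sketch: the non-GUI portion of the PhotoScape-on-Wine setup.
# winecfg's Libraries load-order changes must still be done manually.
photoscape_wine_prep() {
  sudo apt-get update
  sudo apt-get install -y wine
  winetricks msi2        # step 1: MSI support
  winetricks gdiplus     # step 6: native gdiplus (fixes the save error)
  winetricks vcrun2008   # step 9: Visual C++ 2008 runtime
}
```

Source the file and call `photoscape_wine_prep` once, then do the winecfg steps and run the PhotoScape installer as described above.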

Install SPICE on Ubuntu 11.10

Thanks to the efforts of many volunteers, it is now possible to install SPICE on K/Ubuntu 11.10. In this document I mention the dramatically improved video playback performance and the stereo audio playback over the LAN, but these are only some of the features of SPICE. You might want to take a look at the SPICE web site to find out more about it.

The necessary binaries are being compiled for Ubuntu by Boris Derzhavets. Of course Mr. Derzhavets is relying on the efforts of other volunteers and I would like to thank them all. I would also like to thank the folks at Redhat for all their work in the Open Source community.

Take the following steps to install and use SPICE with your KVM VM's:

  1. At the time of this writing (December 30th, 2011), the main K/Ubuntu 11.10 repositories do not include the SPICE packages compiled by Boris Derzhavets. Until that happens, you can add Mr. Derzhavets's repositories to your software sources to pull his binaries into your system. Hopefully this step will not be necessary for much longer.
  2. Modify your VM's to use the SPICE-specific virtual hardware.
  3. Install any necessary drivers on the VM's so that they can make use of the new virtual hardware.
  4. Install the SPICE client on your network workstations so that you can remote-desktop using SPICE to your upgraded VM's.

Once this is done you will be able to enjoy high-performance video and audio playback between your workstation and your Virtual Machine over your office network. The performance is not absolutely perfect but it is a major improvement over VNC, RDP and NX.

Let's get started:

Pulling SPICE support into K/Ubuntu 11.10

Please note that the steps listed here are strictly for K/Ubuntu 11.10. K/Ubuntu will most likely incorporate SPICE support in the K/Ubuntu Proposed repository before Q-4 of 2012.

If you are reading this after mid-2012 it is likely that you already have SPICE support in your system. If you are using a version of K/Ubuntu prior to 11.10 you will want to read carefully Mr. Derzhavets's notes to determine which packages to pull from his various repo's. The following notes have only been tested on K/Ubuntu 11.10.

Start by adding two private repositories maintained by Mr. Boris Derzhavets to your software sources.

Detailed notes from Mr. Derzhavets about the packages in these repos are available at

Here are some of the ways you can add Software Sources in K/Ubuntu:

  • In the new Muon Package Manager click on the Settings menu, then select Configure Software Sources.
  • In Synaptic click on the Settings menu and select Repositories.
  • From the main KUbuntu Application Launcher (sometimes called the Start button,) click on the Settings menu and select Software Sources.

Once you have found the Software Sources dialog, click on the Other Sources tab. Now press the ADD button at the bottom left of the dialog to add another software source. This will bring up the Add Source dialog as shown in the image on the right. Copy and paste the following text into the APT Line box of that dialog:

deb oneiric main

Click the Add Source button and you will be returned to the Other Sources tab of the Software Sources dialog.

Once again, click on the ADD button at the bottom left of the dialog so that you can add the second PPA:

deb oneiric main

Click the Add Source button to complete the add operation and click the Close button on the Software Sources dialog. Next, click the Reload button (or equivalent for your package manager) to reload all the package lists from the various repositories.

At this point you will probably get an error message related to missing keys. This is because GPG does not yet know about the signing key that Mr. Derzhavets used when he packaged his binaries. You can double-click on the key signature in the error message window. This will cause the signature to be highlighted as in the image on the right. You can then right-click over the highlighted text to bring up the context menu. Select Copy to copy the signature to the clipboard. Next, open a shell and type the following command:

$ sudo apt-key adv --keyserver --recv-keys 5CC1785DC05C1EB5

Don't forget to substitute the key signature that you copied for the one shown above (in case they are different). The apt-key utility will attempt to pull a copy of Mr. Derzhavets's public key from a key server. This will allow the package manager to verify the signatures on the binaries packaged by Mr. Derzhavets.

At this point it should be possible to start pulling the SPICE compatible binaries into your system. Use your favorite package manager to install the desired packages. (The command line below uses the apt-get command but any package manager will do.) Note that the following list is from my main workstation and might include some packages that are not strictly necessary:

$ sudo apt-get install libcelt051 libspice-client libspice-server libspicegtk3-client qemu-common qemu-kvm spice spice-vdagent spice-gtk3-client

If you run into any package problems you can uninstall the problem packages and then re-install the desired package. I ran into a problem with a loadable object module that was provided by two different packages. The easy way to solve it was to uninstall both packages that contained copies of the same file, then install the desired package again without any difficulty.

Hopefully by the time you read this the packages will have been moved into the proposed repository and any package issues will have been resolved.

Restart KVM and Virt-Manager

At this point the desired software is on the workstation filesystem but the previous copies of the software are still loaded in the system. To correct this problem:

  • Exit the Virt Manager
  • Open a shell.
  • Stop and restart the libvirt-bin and qemu-kvm services. There are two different ways to do this. I prefer the service command but, if you don't have it installed, you will want to just call the init scripts manually:
    • Using the service command:

      $ sudo service libvirt-bin stop
      $ sudo service qemu-kvm stop
      $ sudo service qemu-kvm start
      $ sudo service libvirt-bin start

    • Or, if you have not installed the service command, you can accomplish the same using the init scripts:

      $ sudo /etc/init.d/libvirt-bin stop
      $ sudo /etc/init.d/qemu-kvm stop
      $ sudo /etc/init.d/qemu-kvm start
      $ sudo /etc/init.d/libvirt-bin start

  • Finally: restart the virt-manager.

The first time that I ran through the above procedure I discovered that I had missed a few details and got some error messages. Thankfully the messages were informative enough that I was able to quickly work through them. Hopefully you will have a similar experience and the above process will proceed smoothly for you, too. Of course if you are reading this after mid-2012 there is a very real chance that the desired software will be included in the proposed software repository and you won't have to think about the packages at all.

Downloading any necessary drivers

Most likely you already have the drivers you will need for any modern Linux based Virtual Machine. The QXL video driver package is already part of the XORG collection in both the Fedora and Ubuntu repositories. As such one might hope to find it in other distributions of Linux. Drivers for the AC97 audio device have been available in Linux for quite some time.

In practice this means that you will probably be able to boot your modern Linux Virtual Machines and get the correct drivers installed and running with little or no effort. If your Virtual Machine is running an older version of Linux you will probably want to continue to connect using VNC and/or NX. On the other hand you might find, as I did, that the basic VGA or VESA video devices work fairly well with the SPICE backend.

Windows Virtual Machines will detect the QXL video device. When you first boot the VM the Found New Hardware Wizard will appear. There is no doubt that video performance will improve dramatically if you direct the wizard to install the Windows QXL Virtual Device Drivers.

As such if you expect to try using SPICE on a Windows VM you will first want to visit the SPICE project web site and download the most recent driver files. Copy them to a convenient Samba share or create an ISO file that you can connect to the VM's as a CD-ROM.

The desired files can be found on the download page of the SPICE web site:

Look for the Windows Binaries about three-quarters of the way down the page. Remember to unpack the files and save them to a network-accessible samba share or use a program like K3b to copy them into an ISO file that can be connected as a CD-ROM to a VM.
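Besides K3b, the ISO can be produced from the shell with genisoimage (the successor to mkisofs; -J and -r add Joliet and Rock Ridge extensions so both Windows and Linux guests can read it). This is a sketch under assumptions: "qxl-drivers/" is a hypothetical directory holding the unpacked driver files, and the command is echoed as a dry run since genisoimage must be installed to actually run it.

```shell
# Hypothetical directory containing the unpacked Windows QXL drivers.
drivers_dir=qxl-drivers
# Echoed dry run; remove the echo to really build the ISO.
echo "genisoimage -J -r -o $drivers_dir.iso $drivers_dir/"
```

The resulting .iso file can then be attached to the VM as a CD-ROM in Virt-Manager.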

Modifying a VM's Hardware Configuration

In the past I configured my KVM Virtual Machines to use a virtual video card called VMVGA. I then used VNC to display the image that was written to the virtual VMVGA device. I also found that the ES1370 virtual sound device gave me good results through the pulse audio setup on my workstation.

When I started trying to get SPICE to work I found it best to change the virtual hardware. The VMVGA device does work with SPICE but the results are dramatically better with the SPICE-specific QXL device. Also I found that there was no sound over SPICE unless I switched to the AC97 virtual sound device. In fact the sound might work with other virtual hardware - I don't know yet if the virtual AC97 device is somehow tied to the current SPICE software.

In any case, it is always necessary to remove the VNC (or SDL) display backend and install the new SPICE backend.

If you are using the Virt Manager as shown in the images here you will find that editing your virtual hardware setup involves only a few seconds of effort.

  1. Open the Virtual Machine in the Virt Manager and select the Details View as shown in the image to the right.
  2. Click on the Video item in the left panel of the Details View. Select the SPICE compatible virtual video device which is called QXL. This is the device that your VM will see when you boot it. Don't worry - the QXL device is VESA compatible. If your VM is running a relatively modern operating system it will boot and you will be able to install the QXL drivers.

  3. Now that we have enabled the QXL virtual device we need to set the backend to use the SPICE protocol. Click on the Display item in the left panel of the Details view. You will see the currently configured backend. Most likely this will be VNC but it might also be SDL. In either case click the Remove button to delete this backend.

  4. Now click on the Add Hardware button to add the SPICE protocol Display backend. Select the Graphics item in the left panel. Set the Type of Graphics Backend to SPICE Server.

    If you are on a private network and plan on accessing this Virtual Display over your network you might want to check the Listen on all public network interfaces option. Set the port numbers as desired or check the Automatically Allocated option to let the system allocate the next available port at run time. Click Finish to save your changes.

    Your Display Spice / Spice Server backend should look something like the image on the left when you are done.

    At this point you have selected the QXL virtual video card and the SPICE display backend for your Virtual Machine.

    Next we will work on the Sound configuration.


  5. Next: Click on the Sound device in the left panel of the Detail View for your Virtual Machine. It is possible that the device you have currently selected will work fine. For me, on my workstation, I was only able to get sound out of my VM's when I selected the AC97 device. This may not be an issue for you. You can change the setting to AC97 or leave it alone. If the sound doesn't work properly for you it's easy to change again later.

At this point the first step, configuring your VM to use SPICE, is complete. Next you will boot your VM and install any required virtual device drivers. Finally you will install the Spice client and remote-desktop into your VM's using the SPICE protocol.
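For reference, the Virt-Manager changes above end up in the VM's libvirt definition, which you can inspect or hand-edit with "virsh edit <vm-name>". The following is a minimal sketch with example values; autoport='yes' corresponds to the Automatically Allocated option, and the listen address shown matches "listen on all interfaces":

```xml
<!-- Sketch of the relevant fragments of the domain XML; values are examples. -->
<graphics type='spice' autoport='yes' listen='0.0.0.0'/>
<video>
  <model type='qxl'/>
</video>
<sound model='ac97'/>
```

Editing the XML directly is handy on headless hosts where running the Virt-Manager GUI is inconvenient.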

Booting a Windows VM

For those who skim quickly through these documents I mention again that your modern Linux VM's will probably boot and run very nicely with little or no effort on your part. This should be true, at least, for your Fedora and Ubuntu derived distributions. It may also be true of other distros as the open source code provided by Redhat is readily available.

Windows VM's, on the other hand, will boot and detect the new hardware but will not find the required drivers for it. The Found New Hardware Wizard will appear and you will want to direct it to the Virtual Device Drivers that you previously downloaded. See the start of this document for details on where to find the drivers. Be prepared to make them reachable from the VM: one way is to copy them to a samba share on your network; another is to copy them into an ISO file and connect that ISO file as a CD-ROM on the VM.

At this point you can boot the VM and log into an Administrator account. After a short delay the Found New Hardware Wizard will appear. The first page of the Wizard will ask if you want to use Windows Update to find the desired drivers. Select the No, not this time option. Click the Next button to move to the next page.

The second page will ask you where it should look for the necessary driver files. Select Install from a List or Specific Location (Advanced) so that you can specify where you want the Wizard to look for the drivers.

The third page of the Wizard allows you to specify the locations that the Wizard will search to find the QXL video drivers. Choose the appropriate options and browse as needed to find the appropriate files for your version of Windows and your architecture. The snapshot to the right of this text was taken on a 32-bit Windows XP VM with the drivers on a NAS share.

Windows will warn you that the driver you are installing has not been certified by Microsoft. In practice it is true that buggy drivers are the cause of many troubles. I hope that you will install these drivers and take the time to report any problems that you trace back to them. Over time they will, no doubt, develop into the high-performance and high-reliability software that we all need.

In my case I found the installation process was relatively quick and painless.

According to this Redhat Guide from December of 2009 the Found New Hardware Wizard might appear again. This time it will be asking for your help to find and install VDI Port Drivers. I did not get this on my Windows VM's so I'm guessing that the VDI Port Drivers are now included with the QXL drivers.

One last item to install is the Spice Agent. The SPICE for Newbies [pdf] document describes the agent as "an optional component for enhancing user experience and performing guest-oriented tasks. For example, the agent injects mouse position and state to the guest when using client mouse mode. In addition, it is used for configuration of the guest display settings. Future features include copying and pasting objects from/to the guest. The Windows agent consists of a system service and a user process."

You can download the agent software from the same download page on which you found the Windows drivers. See Install the SPICE agent in this Redhat Guide for details on how to do this.

Connecting to your VM via SPICE

Finally we can get to the last step in this process: Connecting to the VM through a SPICE client.

There are two clients to choose from: the Redhat GTK+ client (called spicy, note the missing 'e') and the more basic Spice Client (called spicec). I have played with both of them and prefer the many features of the Redhat client, but it is slower than the simpler Spice Client.

The first thing to do is to install one or both of the clients.

NOTE: At the time of this writing (December 30th, 2011) there are two packages containing spicec, the simple spice client. One is called spice and the other is called spice-client. Do not install spice-client, as it depends on a package that conflicts with a codec package. Install only the spice package to avoid this conflict.

If you are installing one or both SPICE clients on a different workstation you can again follow the notes in the section Pulling SPICE support into K/Ubuntu 11.10 above. You can also download a SPICE client for Windows from the SPICE web site. Of course the Redhat repositories contain SPICE clients for the Fedora distributions of Linux.

If you configured your VM to automatically assign an available port number you will need to find out what number was assigned. Simply boot the VM and check the port number in Virt-Manager's Detail View for the VM. See the sample image to the right of this text. The port number is the highlighted text in the configuration for the Display backend for the VM.
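The auto-allocated port can also be recovered without the GUI, since it appears in the VM's live libvirt XML. The sketch below filters a sample <graphics> line with sed; in practice you would pipe the output of "virsh dumpxml <vm-name>" (the VM name here is hypothetical) through the same filter.

```shell
# Sample line standing in for the output of: virsh dumpxml <vm-name>
sample="<graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0'/>"
# Pull the numeric port out of the port='NNNN' attribute.
spice_port=$(printf '%s\n' "$sample" | sed -n "s/.*port='\([0-9]*\)'.*/\1/p")
echo "$spice_port"   # -> 5901 for this sample
```

Note the VM must be running for the port to be allocated, just as when reading it from the Virt-Manager Detail View.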

At this point you can try the two clients. Each client wants a host name and port number. Clearly the Redhat client has more features but, in my case, I was using SPICE mostly because I wanted to watch Netflix at night. For this use-case the more basic Spice Client demonstrates better performance.
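The invocations are straightforward: both clients take a host and a port. The host name and port below are hypothetical stand-ins for the values shown in Virt-Manager, and the commands are echoed as a dry run; drop the echo to actually connect.

```shell
# Substitute your own VM host and the port read from Virt-Manager.
host=vmserver
port=5901
echo "spicec -h $host -p $port"   # the basic Spice Client
echo "spicy -h $host -p $port"    # the Redhat GTK+ client (no 'e')
```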

If you check your Pulse Audio volume control after connecting with your VM you may not see the Spice Client as one of the sound sources. This is because the client will not connect to the pulse subsystem until a sound is sent from the VM. On Windows VM's you can click on the volume control in the task bar to cause the VM to emit a ding sound. After that you should see a Spice Audio source as at the bottom of the image to the right of this text. If you do not hear any sound you can shutdown your VM and try a different virtual sound device.

Intel distributes open source LibreOffice

Even though Intel and Microsoft are physically across the street from each other, the Document Foundation (TDF) has issued an official statement confirming that LibreOffice for Windows from SUSE is now available on the Intel AppUpSM Center.

Formed to "fork" the OpenOffice productivity applications suite away from its originators now that the project sits under Oracle, LibreOffice is a free-to-download open source "Office" suite of apps largely compatible with Microsoft technologies.

A new online repository containing the LibreOffice download has been set up with specific compatibility for Intel processor-driven devices, particularly the company's Ultrabook high-end sub-notebooks.

Read More - Click Here!

Is Chrome OS Linux?

Where did Chrome OS come from? Originally, it seems to have started with Ubuntu Linux. Chrome OS was released in November 2009 and the news quickly came out that Canonical, Ubuntu's parent company, had helped build Chrome OS.

In a Canonical blog posting, Chris Kenyon, then Canonical's VP of OEM Services, wrote, "Canonical is contributing engineering to Google under contract. In our discussions, Sundar Pichai [Google's senior vice president of Chrome] and Linus Upson [Google's VP of engineering for Chrome] made it clear that they want, wherever feasible, to build on existing components and tools from the open-source community without unnecessary re-invention."

So, Chrome OS today is based on Ubuntu? Well, no... it's not. The first builds of Chrome OS had Ubuntu as their foundation, but it's changed over the years. In February 2010, Chrome OS started switching its foundation Linux distribution from Ubuntu to the older, and more obscure, Gentoo Linux.

This was done, as recorded in a Chromium OS developer e-mail list discussion, because "the need to support board specific builds and improve our tools has become more urgent. In order to get there more quickly we’ve been investigating several different build tools. We found that the Portage build tools suit our needs well and we will be transitioning 100% within the next week."

Portage is Gentoo's package management system. It's most noteworthy because, instead of using prepared program packages, such as those used in Red Hat's RPM or Debian's DEB, for installing software, it compiles programs directly from source code.

That's not the end of the story though. While Gentoo's Portage is still used for package management in Chrome OS, sources say that today's Chrome OS "kernel is a regular upstream kernel plus our own changes. We don't pick up anything from Gentoo in that area." So, today's Chrome OS is based on Google's own take on the vanilla Linux kernel while Portage is still used for software package management.

No matter how exactly Chrome OS got its start, today it's becoming a popular Linux distribution. While it's most often found pre-installed on Chromebooks, Chrome OS can also be installed on PCs.

Read More - Click Here!

Is Drupal a Competent Web Application Development Framework?

A team lead at Mindfire Solutions asked the question, "Is it advisable to use Drupal as a Web Application Development Framework?" Thus far we have only used Drupal for web design. But as a Web Application Development Framework? Let's see what the experts say...


  • Alan Rodriguez



    Owner, Crumpeta Consulting, LLC

    True, Drupal is not an MVC framework in the strictest sense but it is a very competent Web Application framework. I've not encountered a scenario yet where the typical web application requirement cannot be reasonably mapped into a Drupal site. If you're looking for developing strict MVC code, then you will be missing out on the popular and usual ways to get things done with Drupal - ways which nullify the benefits of using Drupal.


  • Joel Wallis Jucá


    Software Developer, Drupal DevOps

    With Drupal you'll stop writing a bunch of controller classes, drivers, and other technical stuff and focus on your business model. The most important part of Drupal is the community, not the software.

    Access and you'll see: "Come for the software, stay for the community." And that's what Drupal is all about: an extremely strong community of highly technical developers, businesses, big companies and small agencies, DevOps and designers, all of them collaborating to build something that puts quality first.

    Technically speaking, you gain some base concepts for developing your applications. With modules like Views and Panels you are able to build awesome pages, arrange them the way you need, or give that power to your users too. And there's a powerful module system that allows developers to interact with and improve practically every aspect of the software through pluggable components. The result? A really professional code organization that gives you the power of organized extensibility.

    If you're new to Drupal you may be confused by these aspects; it's not so easy to find them in other software communities. The best way to check whether Drupal is good for your projects is by reading a lot about Drupal, and of course about open source markets and business models.

    Collaboration instead of competition. That's why Drupal rocks! :-)


  • Debasis Sabat



    Team lead at Mindfire Solutions

    Thank you Alan and Joel for your valuable review points. I want to add a few more points that give us more confidence in how we can use Drupal as a web application development framework.

    - First of all, Drupal is an event-driven (hooks) CMS with a pluggable architecture.
    - We can use Drupal's powerful features like user management, sessions, and email to build any kind of complex user-based application.

  • Dorian Marchand



    Directeur associé chez Kernel 42


    For me, Drupal is clearly a good choice for application development.

    We use it to build complex business applications here (in France) for some of the biggest French companies and institutions, entirely without site building (only code and framework).


  • Timothy Joko-Veltman



    Senior Drupal Developer

    My view is that, in fact, Drupal is not really a CMS at all, but a web application framework in which content has a special importance. No, it is not true MVC, but it is not MVC that makes a framework a framework. My experience seems to support this, as I have used Drupal for many things, from web service providers and clients to reporting and data import/export - things only indirectly related to content management.


  • Sofia Vargas Koch



    Information Architecture & Management

    We have built a series of complex web-based applications and for us Drupal has proven to be the right choice. Time and money are always the limits on development; Drupal gives you the tools and the speed to build complex structures and present them in a user-friendly way.


  • Matt Ryan



    An effective manager with excellent computing skills and a passion for outdoor activities

    We have built quite a few complex applications with Drupal as a backend and I'd definitely recommend it. Drupal is not just about content management: it also gives you user management, a security model, database abstraction, forms processing, a theme engine, and much, much more. Until fairly recently I had a couple of production sites that didn't have a single node or entity.


  • Franco Ambrosi



    Information and Communication Technology Expert

    Don't forget the ability to connect external XML datasets in Views. This expands flexibility and interoperability, vital keys in web app development.


  • Debasis Sabat



    Team lead at Mindfire Solutions

    @Timothy, I must say data sanitization is another powerful feature Drupal provides: we can use it when saving and retrieving data to make sure that malicious information isn't being inserted into the database.

    Please share your views on how Drupal can be a potential foundation for application development...

Is FREE LibreOffice a Better Microsoft Office Alternative?

(Gavin Phillips @ MakeUseOf) Long-time Microsoft Office challenger LibreOffice just received a makeover, amongst other interesting updates, in its 5.2 update package. LibreOffice is regarded as a serious contender to Office's crown as productivity suite king, but has been held back over the years. Niggling bugs and a somewhat clunky UI have been long-time complaints, as have import and export formatting issues.

Has LibreOffice finally found the winning formula? And will it be enough to convert this life-long Office user?

What’s New?

Let’s start with a quick rundown of LibreOffice 5.2 new features:

  • Almost a complete UI overhaul. Menus, toolbars, buttons, rulers, tabs, and more receive an update making the LibreOffice UI much more aesthetically pleasing and easier to navigate.
  • Introduction of OpenGL for presentations. 3D accelerated slide transitions and more come to Windows.
  • Track changes and review now work, jumping from one comment to the next on completion.
  • Increased compatibility with Office “C-Fonts” such as Calibri and Cambria. LibreOffice ships with open-source fonts with equivalent proportions.
  • Improved “Start Centre”, with additional user templates added from the LibreOffice community.
  • Improved source code via Coverity Scan analysis.

Let’s Take a Closer Look…

On first impressions, LibreOffice really has made ground on Microsoft Office. The UI is nice. It loads notably quicker than the previous version, 4.4, which I was playing with last week for an upcoming Excel alternatives article. The developers, The Document Foundation, believe LibreOffice 5.2 “is the most beautiful ever” having received “a lot of UX and design love.”

LibreOffice Calc

The properties, styles and tabs sidebar has received a little makeover, too. I’ve always liked having this selection of formatting tools to the right of my work, and LibreOffice offers this in their native setup, across Writer, Calc, Impress and Base. +1 for LibreOffice. Maybe another +.5 for the colour on my screen.

LibreOffice Sidebar

I’m not convinced it’s the most beautiful application ever, but it’s looking good.

Tracking Changes and Formatting Updates

Tracking your editorial changes and commenting now work properly: each time you accept or reject an editorial note it moves directly to the next in the queue. Seeing small bugs like this finally being erased from LibreOffice illustrates the desire to gain parity with Office. I can see this small update winning LibreOffice users. It has been a genuine frustration receiving .ODF documents from colleagues, only for Office or some other software suite to break everything.


Importing into and out of LibreOffice has become relatively seamless. Compatibility with Office is a must, and the developers have recognised this. Documents saved with comments, editing and formatting in LibreOffice export to Office, and import just as well.

LibreOffice’s inclusion of the open-source fonts Carlito and Caladea certainly aids the process, making the import of Microsoft Office Open XML (OOXML) that bit faster, with most, if not all, of your formatting escaping modification. Most of the niggling .docx import issues have also dissipated with this 5.2 update.

Start Centre and Templates

The Start Centre offers more drop-downs and functionality than previous iterations. Having all recently associated documents centred in the Start Centre is a nice touch. However, the lack of native templates is slightly disappointing, and for those users potentially making the switch from Office, this could be a turn-off.

Libre Office Templates

I know that there is a massive number of templates available for download, but Office really does excel on convenience there: tap what you’re after into the search box, and you usually find a functional, well-designed template for instant download. Perhaps later versions will see this feature further implemented.

OpenGL Presentations

3D accelerated presentations come to Windows, having already featured on OSX and Linux for some time. Let’s face it: slideshow transitions stopped being an amazingly fun tool when most of us were teenagers, but the move to include a feature that has been commonplace on OSX and Linux will undoubtedly please PowerPoint and LibreOffice Impress users.

Coverity Scan Analysis

As we can see in the image, the Coverity Scan analysis metrics returned some 12,354 defects in the current code. Following the scan, nearly 12,000 of these defects have been fixed, delivering a more compact, safer, more reliable Office package. If the code isn’t working, your application won’t work; it stands to reason. LibreOffice is making great progress by eliminating the small issues before they become big problems.

LibreOffice Analysis

LibreOffice vs Microsoft Office

There are countless Office alternative articles extolling the virtues of LibreOffice over Microsoft Office, but this article isn’t one of them.

Yes, The Document Foundation has upped its game with LibreOffice 5.2 and yes, it is quite pretty all round. Better yet, it’s completely free, and if that is something you need from your software, then I would absolutely advise you to download and use it.


However, it still cannot compete with Microsoft Office across the board. I may be biased. I might be. But Word does almost everything right for me. The top menu and right-hand properties and formatting tab are a bonus, but I can rearrange Word to this end. Excel still packs a powerful punch that most other spreadsheet applications struggle to get close to, but Calc is a strong second, and I can see why so many Linux distros use LibreOffice as their default Office package.

It is better. It’s not the winner.

Is Windows 8 a Linux Copycat?


"M$ does what it always has done, and that is borrow other people's ideas for software," asserted blogger Robert Pogson. "There's nothing wrong with that -- it is normal in the world of software ... ." What's wrong is when "M$ calls it innovating and applies for software patents on other people's ideas and sues people over them. ... May M$ rot in Hell for that."

Here in the world of technology, there's no denying that developers of even the most creative new products and ideas "stand on the shoulders of giants," just as innovators in most other realms do too.

New ideas inspire more new ideas over time, after all, so it's not surprising to see myriad commonalities and linkages among them.

Lately, however, that notion is being examined a little more closely than usual in light of recent revelations about Microsoft's (Nasdaq: MSFT) forthcoming Windows 8 and -- in particular -- how much it has in common with Linux.

Read More - Click Here!


It's Not Stealing - It's Open Source

Extra expense is something most businesses, new and established, can't afford in this economy. Fortunately, there's a new wave of top-notch software programs that not only do the job of the big, expensive titles, but do it for free. No, it's not stealing - it's open source.

Some open source programs may charge subscriptions for support, updates, documentation or premium versions, but most are completely usable without paying a dime. And the best part is that there are lots of titles to choose from. Virtually all of the major commercial software programs have some sort of open source counterpart. Here are a few of the top picks...

LibreOffice, a descendant of OpenOffice.org (OOo), is a full-featured suite of office tools, including word processing, spreadsheets, presentations and databases. Best of all, it's compatible with Microsoft Office®, so you won't miss a beat when sharing documents with others. It also supports the OpenDocument Format, an emerging standard, and can turn your documents into PDF format right from the program - something most office suites cannot do.

It's completely free, available for all mainstream operating systems and can be used for any purpose: personal, private or commercial. You can install the same copy on as many computers as you want and give away unlimited copies to your friends. It is the pinnacle of Open Source software - and absolutely worth a try for any business looking to cut costs. Download available at

GIMP (GNU Image Manipulation Program) is a full-fledged graphics creation and photo editing/retouching program that rivals the industry standard, Adobe Photoshop®. It can be used for everything from simple "paint" type tasks to expert-level photo retouching and image conversion.

Like LibreOffice, it's completely free and available on most operating systems. Download available at

Other Options: There are dozens, if not hundreds, of free open source programs that are equal to - and sometimes better than - their commercial counterparts. A few other examples are:

Mozilla Thunderbird®: A world-class email client similar to Outlook® or Lotus Notes®. Download available at

Scribus: Desktop publishing program similar to QuarkXpress or Adobe InDesign. Download available at

Inkscape®: Vector graphics editor similar to Adobe Illustrator® or Corel Draw®. Download available at

Your business needs every advantage it can get in this current economic environment. Be sure to check out these software solutions and see if they work for you and your business.

Kodi Sync Over Multiple Devices

Kodi is still one of the most powerful media center applications around, and it works on everything from powerful media PCs to small Raspberry Pis. But if you have multiple TVs in your house, wouldn’t it be nice if they all stayed in sync?

By default, if you have multiple Kodi machines, they won’t recognize each other. Episodes you watched on one TV won’t show as “watched” on another. Wouldn’t it be nice, though, if your bedroom Kodi box knew what you watched in the living room, and vice-versa? And wouldn’t it be nice if you could stop watching a movie in the living room, and resume watching right where you left off somewhere else in the house?

Well, it’s possible—it just takes a bit of setup. Here’s how to do it.

What You’ll Need

The core of the synchronization magic we’re about to undertake is a MySQL database. Don’t panic if you’ve never used one before! It does require a little technical know-how, but we’re here to guide you every step of the way. If you follow along closely, you shouldn’t have any problems.

What we’re going to do is install a free copy of MySQL server, then instruct all of your Kodi machines to use a database on that server as their library (instead of a separate database on each individual computer). From that point forward, when Kodi checks to see if you’ve seen a specific TV show episode or movie, paused media, or set a bookmark, it won’t just be answering for the specific media center you’re standing in front of, but for all media centers in the house.
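The idea above—one shared library database answering for every media center—can be sketched in a few lines. This is an illustrative stand-in only: it uses Python’s built-in sqlite3 module in shared-cache mode in place of a real MySQL server, and the table and episode names are made up for the demo.

```python
import sqlite3

# Two "media centers" connect to the same shared database instead of
# each keeping a private library. (Kodi does this with MySQL; sqlite3's
# shared-cache in-memory database stands in here so the sketch is
# self-contained.)
shared = "file:kodi_demo?mode=memory&cache=shared"

living_room = sqlite3.connect(shared, uri=True)
bedroom = sqlite3.connect(shared, uri=True)

living_room.execute(
    "CREATE TABLE IF NOT EXISTS watched (episode TEXT PRIMARY KEY, done INTEGER)"
)

# Mark an episode watched from the living room...
living_room.execute("INSERT INTO watched VALUES ('Dexter S01E01', 1)")
living_room.commit()

# ...and the bedroom box sees it immediately, because both machines
# consult the same database rather than a per-machine library.
row = bedroom.execute(
    "SELECT done FROM watched WHERE episode = 'Dexter S01E01'"
).fetchone()
print(row[0])  # 1 -> shows as watched everywhere
```

The same principle drives everything in this guide: watched flags, resume points and bookmarks all live in one place, so every front-end gives the same answer.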

For this project, you’ll need the following:

  • More than one media center with Kodi installed (they’ll all need to be the same base version of Kodi—we’ll be using v17 “Krypton” in this guide).
  • A free copy of MySQL Community Server—the Kodi wiki recommends grabbing version 5.5 instead of the newer 5.7, so that’s what we’ll be using for this tutorial.
  • An always-on or nearly-always-on computer to run the MySQL server on.

You can install the MySQL server on any computer that will be consistently on while you’re using the media centers. In our case, we’re going to install MySQL on the same always-on home server that we store our movies and TV shows on—that way, any time the media is available to Kodi, so is the database.

Step One: Install the MySQL Server

For this tutorial, we’ll be installing MySQL on a media server running Windows 10. Our installation instructions should match for any version of Windows. For other operating systems, please consult the MySQL 5.5 Manual.

The installation of MySQL is straightforward. Simply download the server installation app and run it. Accept the license agreement and the “Typical” installation. When it’s finished, make sure “Launch the MySQL Instance Configuration Wizard” is checked, and click Finish.

The MySQL configuration wizard will launch and present you with the option to select between Detailed and Standard Configuration. Select Standard Configuration and click Next.

On the next screen, check “Install As Windows Service”, name it MySQL—or, if you’re running multiple MySQL servers for some purpose, give it a unique name—and check “Launch the MySQL Server Automatically” to ensure the MySQL server is always on when you need it.

On the next screen, check Modify Security Settings, plug in a new root password, and check Enable root access from remote machines.

Click through to the final screen and press Execute to let the wizard set everything up with the parameters you’ve specified. When it’s finished, move on to Step Two.

Step Two: Set Up Your MySQL User

Next, it’s time to create a user account on the MySQL server for your media centers. We’ll need a bit of command line work for this. To start, run the MySQL Command Line Client—you should have an entry for it in your Start Menu.

When the console opens, enter the password you created in the previous step. You’ll then find yourself at the MySQL server prompt.

At the prompt, type the following commands, pressing Enter after each one, to create a user on the database server:

CREATE USER 'kodi' IDENTIFIED BY 'kodi';
GRANT ALL ON *.* TO 'kodi';
flush privileges;

The first portion of the first command creates the user, and the second portion sets the password. While identical usernames and passwords are generally a huge security no-no, in this case we’re comfortable using a matching pair for the sake of simplicity. A MySQL database, on a private server, that tracks which episodes of Dexter you’ve watched is hardly a high-risk installation.

That’s all you need to do in the command line for now—though we recommend keeping the command prompt open for the MySQL server, as we’re going to check in later and take a peek at the databases once Kodi has created them for us.

We have one final task before going to configure Kodi. Make sure that Port 3306 (the MySQL server port) is open on the firewall of the machine you’ve installed MySQL onto. By default, the Windows installer should open the port automatically, but we’ve seen situations in which it didn’t. The easiest way to open the port is with a PowerShell command. Search for PowerShell in your Start menu, then right-click on it and choose “Run as Administrator”.

Then, run the following command and press Enter:

New-NetFirewallRule -DisplayName "Allow inbound TCP Port 3306 for MySQL" -Direction inbound -LocalPort 3306 -Protocol TCP -Action Allow

If the command completes successfully, you should be good to continue.

Step Three: Back Up Your Current Kodi Library (Optional)


By default, Kodi uses an internal SQLite database. In order for Kodi to communicate effectively across your home network, we need to instruct it to use an external MySQL database. Before we get to that step, however, you’ll need to make an executive decision: you can either back up your current library and restore it later (which can sometimes be finicky), or you can start fresh with a new library (which is easy but will require you to re-set the watched state on your shows, and possibly re-choose your artwork if you don’t store it locally).

If you want to back up your current library, you can do so from within Kodi. Only do this from one machine—choose the machine with the most up to date libraries. Open Kodi and head to Settings > Media Settings > Export Library. (If you don’t see these options, make sure your menus are set to “Advanced” or “Expert” in Kodi.)

You can export your library as a single file or as separate files. A single file will allow you to put your backup in one place, while multiple files will scatter extra JPG and NFO files into your media folders—this is more reliable, but quite cluttered. Choose whichever option you want.

Once your library is backed up, continue to the next step.

Step Four: Configure Kodi to Use Your New MySQL Server

Once you’ve backed up the library (or opted to not worry about it and start from scratch), you’re ready to point Kodi to your MySQL server. You’ll need to perform this step on every machine running Kodi, but we recommend setting it up on one machine first—probably the same machine you backed up your library from, if you chose to do so.

In order to point Kodi to MySQL, we need to edit Kodi’s advancedsettings.xml file. By default this file does not exist (although it is possible that, during the installation process, Kodi created one for you to deal with specific configuration issues). If the advancedsettings.xml file exists, it will be in the following location, based on your OS:

  • Windows: C:\Users\[username]\AppData\Roaming\Kodi\userdata
  • Linux and other Live versions of Kodi: $HOME/.kodi/userdata
  • macOS: /Users/[username]/Library/Application Support/Kodi/userdata

Check in that folder. Is there an advancedsettings.xml file there? Yes? Open it up. No? You’ll need to open a text editor and create one. Regardless of whether you’re editing an existing file or creating a new one, paste the following text into it (note: if there are already some entries in your advancedsettings.xml file, leave those in place and put these values within the correct sections):

<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>***.***.***.***</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
  <musicdatabase>
    <type>mysql</type>
    <host>***.***.***.***</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </musicdatabase>
  <videolibrary>
    <importwatchedstate>true</importwatchedstate>
    <importresumepoint>true</importresumepoint>
  </videolibrary>
</advancedsettings>
Edit the above text to reflect the IP address of your server on your LAN and the username/password of your MySQL database (in our example, it was just kodi/kodi). This basic setup should get your video and music libraries synced, but you can also sync other portions of Kodi, as well as sync multiple profiles with the name tag if you use them.
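Because a malformed advancedsettings.xml makes Kodi silently fall back to its local database, it can be worth checking that the file is well-formed XML before copying it to each machine. Here is a minimal sketch using Python’s standard library; the host value in the sample is an illustrative placeholder, not your real server address.

```python
import xml.etree.ElementTree as ET

# A minimal well-formedness check for advancedsettings.xml. If the XML
# is broken, ET.fromstring raises a ParseError instead of letting Kodi
# silently ignore the file. The sample below uses placeholder values.
sample = """
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
</advancedsettings>
"""

root = ET.fromstring(sample)  # raises ParseError on malformed XML
assert root.tag == "advancedsettings"
print(root.find("./videodatabase/host").text)  # 192.168.1.10
```

In practice you would read your real file with ET.parse("advancedsettings.xml") instead of the inline sample string.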

Once your advancedsettings.xml file is ready to go, open Kodi on that machine. You’ll need to either import your library (from Settings > Media Settings > Import Library), or rescan your sources to begin populating the MySQL database from scratch. Do that now.

When that’s done and your library is back in place, you can hop over to your MySQL command prompt and check to make sure Kodi created and populated the databases. At the MySQL command prompt, run:

SHOW DATABASES;
It will output all the databases currently on the MySQL server. You should see, at minimum, the following databases: information_schema, mysql, and performance_schema, as these are part of the MySQL installation itself. The default database names for Kodi are myvideos107 and mymusic60 (we’re not using a database for music in our example, so only our video database appears in the list).

If you ever need to remove a database from your MySQL server, you can use the following command:

DROP DATABASE databasename;

Empty databases take up hardly any space, and won’t negatively impact the performance of your syncing system, but it’s nice to keep things tidy.

If your databases are there, that’s a good start, but it’s worth performing a simple check to see if Kodi is properly populating them. From the MySQL command prompt, run the following commands (replacing databasename with the name of your video database):

SELECT COUNT(*) FROM databasename.movie;
SELECT COUNT(*) FROM databasename.tvshow;

Each query will return the total number of movies and television shows, respectively, contained in your library (according to the MySQL database). In our case, it recognized our library with 182 movies and 43 TV shows.

If the number of entries is zero, there is a problem somewhere along the line. Here’s a quick troubleshooting checklist of common mistakes:

  • Did you copy the advancedsettings.xml file to your machine before you started Kodi and re-populated your library?
  • Did you use the GRANT ALL command to give the Kodi account access to the MySQL server?
  • Did you open port 3306 on the MySQL host machine’s firewall?
  • Are your sources valid and scannable when you remove the advancedsettings.xml file and revert to the local database? If not, you’ll need to troubleshoot your sources independently of your MySQL problems.

If everything looks good and your SELECT COUNT query pans out, that means you’re ready to start taking advantage of the cross-media-center syncing.

Step Five: Repeat Step Four for Your Other Kodi Machines

The hard part is over! Now you just need to go to each of your other Kodi machines and place the same text in the advancedsettings.xml file that you did in step four. Once you do so (and restart Kodi on that machine), it should immediately grab your library information from the MySQL server (instead of you needing to re-populate the library yourself).

On some devices, like Raspberry Pis running LibreELEC, you’ll need to go into the Network settings and make sure “Wait for network before starting Kodi” is turned on for this to work properly.

In addition, if your videos are on a share that requires a password, and you get an error after setting up your advancedsettings.xml on a new machine, you may have to go to the “Files” view, click “Add Videos”, and access a folder on the share so Kodi prompts you for your credentials. You can then click “Cancel” or add the source as containing “None” type of media.


From there, try watching a video on one box. You should find that when you’re finished, it shows as “watched” on your other Kodi devices as well! You can even stop a video on one machine, then pick up where you left off just by selecting it to play on another machine. Enjoy your new whole-house library syncing!

Kohana Framework 3 Tutorial part 1 – installation and setup


“Kohana is an elegant HMVC PHP5 framework that provides a rich set of components for building web applications.”

I am a big fan of web application frameworks, as they really do cut down on development time. However, they can be quite challenging to the newcomer. A few years ago I first started with CakePHP and then quickly moved to CodeIgniter, which was my framework of choice until I found Kohana. It is by far the best framework out there, not only because it is pure PHP5, but also because it offers an unprecedented amount of flexibility to the user.
Anyway, let’s not talk about theory but rather about this tutorial. Because Kohana’s documentation is pretty bad (nonexistent for version 3), I thought I would do a few tutorials in a series here. I will try to build an entire web application (nothing too fancy, more on that later) and document every step. I am not a PHP expert, nor a Kohana expert, but I hope this will provide beginners with a bit of a head start!
The application: since I couldn’t come up with a better idea, I thought I would just recreate Twitter. I personally like the way Twitter is built (especially the sexy URLs), so I thought this would be a good example application.
So let’s start with Part 1:

We will cover the basic installation of the framework and the configuration of your environment (sounds complicated, but it’s really easy).
So first you will have to download the newest version of the framework from (I am going to be using v3.0.4.2 for this tutorial). You should have a file called “”. Unzip this file and copy its contents into the root directory of your application. This is the directory that your server will look in when your application’s domain is accessed (in this tutorial that will be http://localhost/, but for our real application this would be
Here is a screenshot of what I have in my web application’s root directory:
Just so you get a quick idea: the “application” folder is where we will be building our Twitter application, “modules” contains helper classes (these will be explained in more detail in a future part, as they are not of much interest to us now) and “system” contains the Kohana framework code (feel free to check it out, but make sure you know what you are doing before you start changing things in there).
So… let’s get started. If you now point your browser to http://localhost/ you will get the Kohana “Environment Tests” page. It displays some information about your PHP version, your directories, etc. My test never passes at first because of two things, and I guess this will be a problem for many people. Here is the screenshot:
Two problems: the cache and the logs directories are not writable. The cache directory is where Kohana will save cache data on the fly (if you don’t want to cache anything, I guess this will not be a problem) and the logs directory is where Kohana will write your log files. These are very useful for debugging during development (although most errors will throw exceptions and print a stack trace in the browser), but they are even more important in a production environment, so that you can check your custom log messages (yes, you can print messages to the log from your application manually) and any other system errors. In production we will turn off in-browser error reporting, as we do not want the user to be confronted with ugly error messages, but we would still like to be able to check from time to time and see what kind of errors users are getting.

How do we fix this?

I use Mac and Linux, so I will give the solution for these systems here; unfortunately, Windows users will have to play around themselves to fix this, but it should not be too hard (change the permissions of these two folders so that the web server user has read/write rights on them). For Mac/Linux, just enter these two commands in the terminal:
chmod -R 777 /Code/Twitter/application/logs/
chmod -R 777 /Code/Twitter/application/cache/
You will obviously have to change the directories to the ones in your configuration, and depending on the ownership of these folders you may have to place “sudo” in front of these commands (it makes you root for the following command) and enter your password.
Finally reload the page at http://localhost/ and you should (hopefully) see something like this:
That looks good! If you also pass the environment tests (or want to ignore the cache and log problems), it is now time to rename the install.php file (or delete it if you want).

In the next part of this tutorial I will configure the bootstrap.php file and introduce the basic application flow!

Kohana Framework 3 Tutorial part 2 – quick introduction to controllers and the framework


Before building the actual Twitter application I think it would be useful to go over the way the framework works by default. So right now we have a clean and working install. Let’s see how it works.

If you go to “http://localhost/” or “http://localhost/index.php” you should see a page that just says “hello, world!”. That is our default/index page. So what happens if you go to “http://localhost/index.php/hello/“? You will get an ugly exception that looks like this:


It seems Kohana cannot find the “Hello” controller (controller_hello). What if you go to “http://localhost/index.php/welcome“? Strange… we get the default/index page. So it seems “http://localhost/index.php/welcome” is actually the same as “http://localhost“. This has something to do with your default route. We will go more into routing in the next part of the tutorial. At least we have a working controller to play with (controller_welcome). So let’s try this: “http://localhost/index.php/welcome/hello“. What happens? We get another ugly exception:
Here Kohana is complaining about a method that does not exist. Why? Well, now may be the time to explain how routing usually works in MVC applications. When you request the page “” Kohana looks for the welcome controller (controller_welcome); if it does not find it, it throws the first exception (“Class controller_welcome does not exist”). If it does find it, it then looks for the hello method in the welcome controller (action_hello), and if it does not find that, it throws the second exception (“Method action_hello does not exist”).
So the default routing behavior is “<controller>/<action>”. If you want people to be able to request the page “”, you would have to create the messages controller (controller_messages) and a show method (action_show) to avoid getting exceptions.
What about arguments? What if you want to show a certain message, like this: ““? In a case like this, the default routing behavior is to pass the last part of the URI as an argument to the action method. The above request would thus result in the following call: “controller_messages::show(54632)”.
We can summarize that the default behavior is this: “<controller>/<action>/<argument>”.
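The “<controller>/<action>/<argument>” convention can be sketched in a few lines. This is a toy illustration, not Kohana’s actual dispatcher, and the route() function name is invented for the example:

```python
# A toy sketch of the "<controller>/<action>/<argument>" routing
# convention described above. route() is a made-up helper; Kohana's
# real request dispatcher is considerably more involved.
def route(uri):
    parts = [p for p in uri.strip("/").split("/") if p]
    controller = parts[0] if parts else "welcome"   # default controller
    action = parts[1] if len(parts) > 1 else "index"  # default action
    argument = parts[2] if len(parts) > 2 else None
    # Kohana maps these segments to a class and method by naming
    # convention: Controller_<Name> and action_<name>.
    return ("controller_" + controller, "action_" + action, argument)

print(route("messages/show/54632"))  # ('controller_messages', 'action_show', '54632')
print(route("welcome"))              # ('controller_welcome', 'action_index', None)
```

Note how an empty URI falls back to the default controller and action, which is exactly why “http://localhost/” served the welcome page earlier.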
Hopefully this makes sense. Let’s try this by creating the necessary code to be able to respond to the above request: ““.
If you have a look in your application folder at “[application_root]/application” you will see the following: 
We must place all our controllers in “[application_root]/application/classes/controller/”. As you can see, there is already a file named “welcome.php” in there. This makes sense because, as we saw above, the welcome controller already seems to be implemented. So let’s create our messages controller. Create a new file named “messages.php” in the “classes/controller/” folder. You will get a feel for the file placement convention as you go along. Here is the code for the messages controller:


<?php defined('SYSPATH') or die('No direct script access.');

class Controller_Messages extends Controller {

    public function action_show($id = NULL)
    {
        if ($id == NULL) {
            echo "no argument given";
        } else {
            echo "show message with id: $id";
        }
    }

} // End Messages

The first line

<?php defined('SYSPATH') or die('No direct script access.');

is simply there to avoid direct access to this file (this should never happen, but it is an extra security measure). Make sure you put this at the top of all your files.
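For readers more comfortable outside PHP, the same “refuse to run unless loaded by the framework” idea can be sketched roughly as follows. This is only an analogy—it is not how Kohana implements the guard, and the SYSPATH flag and load_controller() name here are invented for illustration:

```python
# A rough analogy to the SYSPATH guard: a file does nothing useful
# unless the framework entry point has defined a known flag first.
# Purely illustrative; Kohana's real check is the one-line PHP guard
# shown above, and these names are made up.
SYSPATH = None  # a real front controller (index.php) would set this


def load_controller():
    if SYSPATH is None:
        raise SystemExit("No direct script access.")
    return "controller loaded"


try:
    load_controller()
except SystemExit as exc:
    print(exc)  # No direct script access.
```

The point is the same either way: a direct request to the file bails out immediately instead of exposing internals.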

Next we define the class:

class Controller_Messages extends Controller {

This is the Kohana naming convention (actually, it also has something to do with where you place your files, but we will get to that a bit further down the road). Here we have created a class “Controller_Messages” that extends “Controller” (part of the Kohana core). Next we define our method (“messages/show/”):

public function action_show($id = NULL)
{
    if ($id == NULL) {
        echo "no argument given";
    } else {
        echo "show message with id: $id";
    }
}

The method takes an argument (“messages/show/9548″) or defaults it to NULL if none is passed (“messages/show/”). The rest of the code is pretty self-explanatory. Now try going to “http://localhost/messages/show/” and then “http://localhost/messages/show/439834“. Hopefully everything will work as expected. Here you have seen the most basic way of creating a controller and a corresponding request method.

One last thing: what if you go to “http://localhost/messages/showall“? Well, it will throw a “method not found” exception, and that makes sense. But what about “http://localhost/messages/“? That will also throw the above exception, but how should we name this method? This depends on the framework convention again: in Kohana, if no <action> is given in the URI, the default method “action_index” will be called. We have not defined this yet, so let’s quickly do that. Here is the final code for the messages controller:


<?php defined('SYSPATH') or die('No direct script access.');

class Controller_Messages extends Controller {

    public function action_index()
    {
        echo "no <action> given";
    }

    public function action_show($id = NULL)
    {
        if ($id == NULL) {
            echo "no argument given";
        } else {
            echo "show message with id: $id";
        }
    }

} // End Messages

Try it out and see if it works. Hopefully it does. This was meant to be a short introduction to the basic use of controllers in MVC frameworks, which you will need in order to follow the next parts of this tutorial series.

LAMP - What Is It

Short for Linux, Apache, MySQL and PHP, LAMP is an open-source Web development platform, also called a Web stack, that uses Linux as the operating system, Apache as the Web server, MySQL as the RDBMS and PHP as the scripting language. Perl or Python is often substituted for PHP.

The key to the idea behind LAMP, a term originally coined by Michael Kunze in the German magazine c't in 1998, is the use of these items together. Although not actually designed to work together, these open source software alternatives are readily and freely available as each of the components in the LAMP stack is an example of Free or Open Source Software (FOSS).

LAMP has become a de facto development standard. Today, the products that make up the LAMP stack are included by default in nearly all Linux distributions, and together they make a powerful web application platform.
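To make the division of labor concrete, here is a self-contained sketch of what a web stack does: a server process runs code that builds a page from stored data and returns it over HTTP. In LAMP those roles are played by Apache, PHP and MySQL; below, Python’s standard library stands in for all three purely to illustrate the request/response flow, and the "visits" counter is a made-up example.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy "database" standing in for MySQL.
FAKE_DB = {"visits": 41}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        FAKE_DB["visits"] += 1                           # the "MySQL" step
        body = f"visits: {FAKE_DB['visits']}".encode()   # the "PHP" step
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                           # the "Apache" step

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to port 0 so the OS picks any free port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as r:
    page = r.read().decode()
server.shutdown()
print(page)  # visits: 42
```

Each layer of a real LAMP deployment is a separate, replaceable component doing one of these three jobs, which is why the acronym has so many variants.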

The original LAMP acronym has spawned a number of other, related acronyms that capitalize on the main focus of the original combination of technologies to provide feature rich Web sites. Some of these related Web stacks include LAPP, MAMP, and BAMP.

Lemon POS What Is It

Lemon is an open source point of sale for Linux and other Unix systems. It is targeted at small and medium-sized businesses, and has been conceived for ease of use and customization.

It allows you to change the look by editing a CSS file and adding personalized images, providing a modern, good-looking interface for both the user and the client who sees it.

It is considered a general point of sale, not focused on a specific sector. It can be used at a general store, a fast-food restaurant or a book store.

This guide is for lemonPOS 0.9.3

This is a user's guide to Lemon and Squeeze, to help you get familiar with the software. It is open source, so changes will occur over time as new features are added. Be sure that you have the most up-to-date guide, because some features may not have the same name or location. Lemon and Squeeze are an open source Point of Sale (POS) and administration software suite targeted at small and medium businesses. It is designed to be easy to use, yet powerful and flexible. They let you track inventory, print stock and sales reports, use barcode scanners, set prices, check in purchases, set promotional discounts and search your inventory. It uses MySQL for data management and storage, and MySQL can serve as a single database for multiple POS terminals over a network.

Note: This program is mainly developed in a Kubuntu environment. For the easiest install, use Kubuntu and the deb package found at

OS Configuration Tips

This section has tips to help you set up your computer for Lemon, because some configurations can make things easier for users.

Monitor Settings

When setting up Lemon it is best to run at 1024x768 (native) or higher resolution. To check if you are using the right resolution:

Kubuntu: Click on [ K ] menu then [ Applications ] then [ Settings ] then [ System Settings ] and then [ Display and Monitor ].

Turning off Effects

Some operating systems come with pre-installed desktop effects. We recommend turning these off because it will give you slightly better performance.

Kubuntu: To turn off your effect settings, click on the [ K ] menu, then [ Applications ], then [ Settings ], then [ System Settings ] and then [ Desktop Effects ]. In the General tab there is a check mark for Enable desktop effects; un-check it, then click Apply.

You can also, in your [ System Settings ], click on [ Workspace Appearance ], then [ Desktop Theme ], set it to Air for Netbooks, and click Apply.

One last thing: in your [ System Settings ], click on [ Window Behavior ], then [ Virtual Desktops ], and under the Desktops tab set Number of Desktops to 1, then click Apply.

Mouse Options

Setting your mouse to the Double-click to open files and folders option will make it easier to select products, vendors, clients, etc. in Squeeze when you need to remove them. If you are using a touch screen on a client (register) computer, we don't recommend changing this; use it on a computer with a mouse or on an administration computer.

Kubuntu: To check your mouse settings, click on the [ K ] menu, then [ Applications ], then [ Settings ], then [ System Settings ], then [ Input Devices ] and then [ Mouse ]. In the General tab, under Icons, there are two options; select Double-click to open files and folders (select icons on first click).

Keyboard Options

For those people running just a touch screen: at this point Lemon does not have an integrated on-screen keyboard, but if you are running Kubuntu you can click on the [ K ] menu, then [ Applications ], then [ Utilities ] and then [ Virtual Keyboard ]. This will give you an on-screen keyboard. One more setting you want: in Configure (wrench icon), click Dock Keyboard and you should see a small keyboard appear in the top-left corner. This lets you close and open the big keyboard so it is not in the way. You can move the small keyboard icon anywhere on the screen.

Hiding Desktop Panel

This option will hide the desktop panel (where the K menu and the date and time are shown) to make your desktop look cleaner while running Lemon or Squeeze.

Kubuntu: To hide the desktop panel, right-click on the panel you would like to hide, move the mouse over Panel Options, and in the menu that appears, click [ Panel Settings ]. Now click where it says [ More Settings ]; another menu will show, and you want to click on Auto-hide. Then click on the [ X ] next to the [ More Settings ] button to close.

Adding Lemon to Favorites

This option will make it easier to find Lemon when you want to start using it.

Kubuntu: To add Lemon to your Favorites menu (shown when you first open your K menu), click on the [ K ] menu, then [ Applications ]. If you installed using the repository, click [ Office ]; if you installed using the deb package, click [ Lost & Found ]. Find Lemon Point of Sale in that menu, right-click it and click [ Add to Favorites ].

Note for Kubuntu users: if you turn your computer off at night, leave Lemon open, because it should open up again when you turn the computer back on.

Changing Language

LemonPOS is already translated to English, Spanish, Chinese, Brazilian Portuguese, Catalan, French, and German.

Spanish is the most complete translation for lemonPOS.

Kubuntu: Global settings for KDE: start by clicking on the [ K ] menu, then [ Applications ], then [ Settings ], then [ System Settings ], then [ Help ], then [ Switch Application Language ]. Change the Primary Language to your language, click [ Ok ] and restart your computer.

Note: You can set Squeeze's language under the Help menu on the tool bar, but this will not set Lemon's.

Note: You need to set up your language in KDE using the systemsettings tool (install it first).

Note: If you compiled lemonPOS, then before compiling you must install the "gettext" package.

Changing Currency

Depending on your installed OS, you may need to change the currency; Lemon uses your computer's default currency.

Kubuntu: Start by clicking on the [ K ] menu, then [ Applications ], then [ Settings ], then [ System Settings ], then [ Locale ]. Click on the tab that says [ Money ], choose your currency under the Currency field, and then click [ Apply ].

Note to regular Ubuntu users: you will need to install the kde-systemsettings package to perform this task.

Getting started

Once Lemon and MySQL are installed, you can start using Lemon. As you may notice, there are no products in the database; all you should see is the administrator user, a general client, and some other default data. To start using and testing the POS, you need to populate the database with some data. For this you will use Lemon's administration program, Squeeze. If this is a fresh install, you first need the database set up; see Database Creation.

To run Squeeze, find the program icon in the applications menu (it may sometimes show up in the Lost & Found menu in Kubuntu), or run it from a shell (command line) by typing squeeze.

The default user and password

user-name: admin
password: linux

Adding a vendor

You should start by adding a vendor (user): click on the button labeled [ Users ], then click the [ Add User ] button in the bottom-right corner. Fill out the form with the correct information (you can add a photo; it will be scaled to an appropriate size) and then click [ OK ]. To edit, double-click the vendor you would like to edit, or single-click to select it and then click edit. Make sure to either change the password of the vendor named “admin” or delete that vendor.

Note: User Roles are explained in the 'Definitions' section of this guide under User Role.

Adding a client

Clients are optional; if you have no need for them, just use the default (you need at least one). Clients can have their own discount, accumulate points (for a loyalty program), and of course you can keep their data, such as address and phone, for reference. Adding a client works the same as adding a vendor: click on [ Clients ], then click [ Add Client ] in the bottom-right corner. Fill out the form with the appropriate information and then click [ OK ]. To edit, double-click the client you would like to edit, or single-click to select it and then click edit.

Adding categories

Categories are for organizing products; you can have as many as you would like (they help with tracking products). The default category is General, and it can be renamed by double-clicking on it. To create a new category, click on the button labeled [ Categories ], then click the [ Add Category ] button in the bottom-right corner. Fill in the label and then click [ OK ]. To edit, double-click the category you would like to edit.

Adding weights and measures

Measures are the way of labeling a product's sale amount (lb, kg, each, pack, etc.). The default measure is the Piece (Pc); you can change the text by double-clicking on it. Measures can be integers or real numbers; the default Pc is an integer. To create a new one, click on the button labeled [ Weights and Measures ], then click the [ Add Measure ] button in the bottom-right corner. Type in the name of your new measure and then click [ OK ].

Adding a product

Products are the most important information stored in your database, because these are the items which you sell. It is important that the product information is accurate, because you will need to recall it later. To create a new product, click on the button labeled [ Products ], then click [ Add Product ] in the bottom-right corner. Fill in the appropriate information, using the Tab key to move between fields, and then click [ OK ]. If you have no tax, enter 0 in both tax fields.

Note: There are some tips on setting up an inventory for the first time in the 'First Time?' section.

Adding a product (detailed)

First, enter the bar-code, if any. Then press the Tab key to move to the next field and type a brief description. Then enter how many you have on hand (purchased), and the number of points earned per purchase (only if using the point system). Next choose the category the item belongs to (Pepsi, Coca-Cola, Meat, Milk, etc.), then a Sold by measure for the product. Now enter the product's price information (if you have no tax, enter 0 in both fields). The Profit field will calculate a Public price based on the net profit margin:

(Cost + % of Cost) + Tax 1 + Tax 2 = Final Price

Example, Cost + 25% profit: ($1.00 + $0.25) + 10% + 0% = $1.375 (shown as $1.37)
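The same arithmetic can be checked from a shell. This is only an illustration of the formula above; the variable names are mine, not Lemon's:

```shell
# Price formula: cost plus profit margin, then both taxes applied on top.
awk 'BEGIN {
  cost = 1.00; profit = 0.25; tax1 = 0.10; tax2 = 0.00
  price = cost * (1 + profit) * (1 + tax1 + tax2)
  printf "public_price=%.3f\n", price    # prints public_price=1.375
}'
```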

Code, Description, Purchase Qty, Category, and Cost are self-explanatory, but I will explain the others.

There are also Raw Product and Group/Pack, which are explained in the definitions section of this guide.

Check In

Check-In is the process by which you add products from your vendors into your inventory database. When you enter the bar-code of a product that is already in your database, the product's information is automatically filled into the check-in list. Enter the number of items you have purchased, then click [ add this item ]. If a product is not in your database, you can add it while you are checking in: fill in all the information for the new product, then click [ add this item ] (make sure to fill in the purchase amount). The bottom table shows all the products about to be added into your inventory database.

Note: Boxed product purchase (to be added).

Starting Lemon

Now that you have products, you can start selling with Lemon. To run Lemon, find the program icon in the applications menu under Office (it may be in the Lost & Found menu in Kubuntu), or run it from a shell (command line) by typing lemon.

At the log-in screen, type the user-name and password you created, or use the administrator user. You will now see that Lemon is all grayed out except for [ Log in ], [ Start Operation ], [ Configure lemon ], and [ Quit ].

Configuring Lemon

Once logged into Lemon, click on [ Configure lemon ]. In the Configure - lemon window there are 6 buttons on the left: General, Store, Database, Appearance, Security, and Printer (you can find more information on these in the definitions section under Configure-lemon). Start in [ General ]. Put the terminal number you want in the field that says This is the terminal number. In the Low Stock area, type the minimum number in stock you want any item to reach (this is overall). In the Drawer Cash Level area, type the minimum amount of cash you want in the drawer. Next click [ Store ]; this is where you enter the store information you want on your receipt. Then click [ Printer ] and set up your printing options (if you do not have a printer, you can save your end-of-day reports to PDF). Now click [ OK ].

Starting operations

To start using Lemon, click on the [ Start Operation ] button. A window will ask what your drawer's balance is (this will help keep track of your cash). Now everything should be operational: since it is your first time, you should see the search window, the total, the sales window, and all the buttons colored in.

Setting up Lemon's window

Sometimes when you open Lemon for the first time, it may not fit the screen, so you will want to adjust a few things. First, hide the product grid by pressing Ctrl+P on your keyboard; press Ctrl+P again to un-hide it. Next, right-clicking on the icons brings up a menu: move your mouse over Text Position and click on [ Icons Only ]. If your window still does not look correct, left-click just above where it says Code, drag your mouse to the top of your screen, and let go; it should adjust (if not, drag to the left side of your screen and let go, then to the top).

Your first sale

To perform your first sale, either scan an item into the bar-code field or find the item in the search window and double-click on it. Once you have finished scanning all the items, click on the Cash Amount field, or press +, and type in the dollar amount the customer is paying with. Lemon will tell you how much change to give back; when you press Enter you should see a print dialog pop up. Press Enter to print the receipt, or cancel.

Using weights

In order to sell a scaled/weighed product in Lemon, you must first weigh the item on an external scale that reads in tenths of a pound or in grams. Then, in the bar-code area, type the weight (up to 0.00001 precision), then an asterisk ( * ), then the bar code or alpha code (new in 0.9.4rc4).

Example: 1.25*32132100 or 1.25*fruit (new in 0.9.4rc4)
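The quantity*code entry is plain text, so the two parts can be illustrated with shell parameter expansion. This is a sketch only; the entry string comes from the example above:

```shell
# Split a "quantity*barcode" entry the way it reads: the part before and after the *.
ENTRY="1.25*32132100"
QTY="${ENTRY%%\**}"    # text before the first *  -> 1.25
CODE="${ENTRY#*\*}"    # text after the first *   -> 32132100
echo "$QTY x $CODE"    # prints: 1.25 x 32132100
```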

Other useful tasks for vendors

Searching products [ F3 ]: You can search for a product by typing into the field provided. It will match the letters anywhere in the words.

Example: typing – mo - could bring up almonds or moose

Incrementing a product in the list: If you type the quantity then ( * ) then the bar-code it will add that number of items.

Example: 2*03422104 = 2 candy bars

Deleting a product in the list [ Ctrl + minus ( - ) ]: You can delete an item out of the current transaction.

Canceling an in-progress transaction [ F10 ]: This will cancel your current transaction.

Canceling a completed transaction [ F11 ]: This will let you cancel a past receipt.

Reprint tickets [ F5 ]: This will let you reprint a receipt.

Cash Available in the drawer [ F6 ]: This will tell you how much cash is in the register drawer.

Price Checker [ F9 ]: This will check the price without ringing in the product.

Cash In [ F8 ]: If for some reason you run short of cash, this lets you put money in and record a reason.

Cash Out (cash drop) [ F7 ]: Lets you pull cash out of the drawer when there is too much in it.

Make a Balance for the vendor

End of day report

This prints a summary report of the day's transactions. It is saved as a PDF file in the Documents folder, which is located inside the Quick Access browser folder (the one with the star next to the K menu).

Some other administrative tasks

Some other useful administration tasks that you can use in Squeeze. (To be added)

Quick Information Plots

Transaction Reports

Balances Reports

Cash Flow Reports

Configuring Squeeze

Keyboard Shortcuts

These are keyboard shortcuts that let you press keys on the keyboard to get to options and menus faster without having to navigate to them or click on them with the mouse.

General shortcut

Alt + an underlined letter (File, Data, Reports, Add User, Delete Selected, etc.)

Lemon short cut keys

Start operations – Ctrl + N
Focus code input box - F2
Payment method – CASH – Alt + S
Payment method – CARD – Alt + R
Focus the payment amount – Alt + A
Search products - F3
Show products grid - Ctrl+P
Deleting a product from list – Ctrl+minus ( - )
Cancelling current transaction - F10
Cancelling a ticket - F11
Reprint tickets - F5
Show price checker - F9
Cash Available in drawer - F6
Cash Out - F7
Cash In - F8
Balance – Ctrl + B
End of day report – Ctrl + W
Log in / log out – Ctrl + L
Add Special Order - Pg Up
Complete a Special Order - Pg Down
Change Special Order Status – Ctrl + PgUp
Lock Screen – Ctrl + Space
Suspend Sale – Ctrl + Backspace
Resume Suspended Sale – Ctrl + R
Apply an occasional Discount – Ctrl + D

Squeeze short cut keys

Log in – Ctrl + L
Browse products – Ctrl + P
Browse offers – Ctrl + O
Browse categories – Ctrl + C
Browse weight and measures – Ctrl + M
Browse users – Ctrl + U
Browse clients – Ctrl + I
Browse transactions – Ctrl + T
Browse balances – Ctrl + B
Browse cashflow – Ctrl + F
Check In - F2
Check Out - F3
Stock Correction - F4


General Definitions

Low security mode: When enabled, Lemon does not ask for a user name/password when exiting.

Raw Product: These are products used only for Special Orders; they cannot be sold separately. They are the pieces/ingredients used to make the Special Order. (These cannot be used in group/packs; they are only for special orders.)

Special orders: These are products that need to be prepared/manufactured/assembled/cooked/etc. and delivered (or picked up by the client at the store) some time after the order is taken. They can be partially paid, with the full payment made at delivery/pick-up.

Group Product: These are products that group other products together. A group has its own barcode/code/alpha-code, and its price is calculated automatically from the component prices (applying a discount if desired) and taxes. A product can be assigned to multiple groups. (Groups cannot contain another grouped product or raw products.)

For example, a "Combo" formed by "1 hamburger, 1 coke, 1 slice of cake". Another example is "two bottles of water". All the components of the combo must exist before creating the group/pack/combo.

User Role:

Vendor: Only Sell
Supervisor: Authorizes occasional discounts, ticket re-prints (new since version 0.9.4), product removal from the "cart" (if configured to do so), Start Operations, and Cash-in/Cash-out; can configure Lemon. Can use Squeeze to edit users and clients only. Can also sell.
Administrator: Full privileges.

Purchase: This is the number of items you have in your inventory.

Points: These are points used for the loyalty program; they are added to the client's total when the client buys a product that has points.

Sold by: This is the Weight/Measure the product is sold by.

Tax 1 and Tax 2: These are the taxes for the product, in percent. Depending on the configuration (Add Taxes), the taxes are either added to the sale total or calculated only for informative purposes. AddTaxes = True means the price does not include taxes; AddTaxes = False means the price already includes them, and the taxes are calculated for display only. AddTaxes = True matches U.S. practice: you see a price tag, and sales taxes are added when you pay. Together with the Profit field, this information lets Squeeze's product editor auto-calculate the price.
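As a quick sketch of the two modes (the values here are illustrative, not taken from Lemon):

```shell
# AddTaxes=True: tax is added on top of the tagged price (U.S. style).
# AddTaxes=False: the price already contains the tax; the tax share is informative only.
awk 'BEGIN {
  price = 1.00; tax = 0.10
  printf "add_true_total=%.2f\n", price * (1 + tax)              # 1.10
  printf "add_false_tax_share=%.2f\n", price - price / (1 + tax) # 0.09
}'
```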

Public price: This is the selling price, with taxes included. It can also be calculated from the Profit and tax fields.

Profit: This field is for calculating the public price with the desired net profit margin; taxes are also taken into account. After entering cost and taxes, press the $ button to calculate the price.

Configure-lemon Definitions


General: This has general settings for the terminal's look and warnings.

Dialogs and Authorizations:
Show a dialog when printing ticket: Toggles whether a receipt preview appears on screen while the receipt is printing.
Time showing the dialog: How long the printing dialog stays on screen.
Require authorization to delete item from shopping list: With this checked, a supervisor's or administrator's log-in and password is required to delete an item from the shopping list.
Products Grid:
Show products grid: Toggles whether the product grid is shown when you start Lemon.
Low Stock:
Minimum value for alert: Shows an alert when any item gets down to the specified amount.
Drawer Cash Level:
Display warning message when level is lower than: When the cash in your drawer is lower than the dollar amount you specify, Lemon will warn you.
Add taxes to sale (not included in price): Adds taxes to the total of the sale, not to each item.


Store: This is where you enter the store information that you want on your receipt.


Database: For setting up the database that Lemon will use.


Appearance: For configuring Lemon's theme.


Security: For configuring which actions require a password.


Printer: For setting up your printing options.

First Time?

Is this your first time setting up a POS system? If so, here are some tips to help you get started in the right direction.


If your business is already in operation, you do not want to stop everything for 24 hours just to input item bar codes and descriptions. Instead, you can add product codes and information over time, during slow periods: in Squeeze, add each item (see Adding a product) with 1 in the purchase field and all the other information filled in. Then, when you are ready to change over to your new POS system, use Squeeze's check-in function (see Check In) to add the rest. Check-in automatically fills in the information you already entered, so all you have to do is count what is on the shelf and subtract 1 (because you already entered one). Example: you have 3 bottles of Coke on the shelf, so 3 - 1 = 2; enter 2 in the purchase field and click [ OK ], and it will be added to the 1 already in your system.

Note: You can also do this without pre-adding a product; it will simply show as "out of stock" until checked in.

Database Creation

The first time, you have to create a database. The database stores everything related to the operation of the POS: sales, users, and products.

Remember that MySQL should be running and configured before creating the database, and before running LemonPOS.

To create the database, run the script (from a terminal) included in lemonpos/database_resources/, or, on recent releases, located at /usr/share/kde4/apps/lemon/

Open your terminal program and type (if you used the repository or a deb package): cd /usr/share/kde4/apps/lemon/

then type: cat lemon_mysql.sql | mysql -u root -p

Open your terminal program and type (if you compiled it yourself): cd lemonpos/database_resources/

then type: cat lemon_mysql.sql | mysql -u root -p

Note 1: If you are updating lemonPOS, check the release README and/or lemonpos/database_resources/README for instructions on updating the database if needed. When a fix is provided, the file is named "fixme_VERSION.sql", where VERSION is the version you currently have installed. For example, for 0.9.3rc2 the file is "fixme_0.9.2.sql", meaning you must have version 0.9.2 installed and be upgrading to 0.9.3rc2.

Note 2: If you do not know which "fixme" files you need, change into the lemonpos/database_resources or /usr/share/kde4/apps/lemon/ directory in a terminal, type ls, and press Enter; that will list all the database update scripts. If you are installing LemonPOS for the first time from git or using natty_rc8, you will need to run all the 0.9.3 updates as well as the named fix files.

Database Back up

It is good practice to back up your database over time and after any large change. To back up your database, open a terminal (Linux) or command prompt (Windows).

To back up a database, type: mysqldump -u root -p database_name > name_of_back_up.sql

Example: mysqldump -u root -p lemondb > lemon_backup.sql

If you do not know the database name, log in to MySQL and type show databases; (see Database Restore).

You can also back up all databases at once if you have several: mysqldump -u root -p --all-databases > name_of_back_up.sql

Note: The file is saved in the directory your command line says you are located in.

Example: C:\Users\Jon Dough> means you are in the "Jon Dough" folder inside the "Users" folder on the "C:\" drive. (To change the save location, see "Changing file location".)
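To avoid overwriting older backups, the file name can carry the date. This is only a sketch: the mysqldump command is echoed rather than run, so nothing here needs a live server, and the database name is the one from the examples above.

```shell
# Build a dated backup filename, then show the mysqldump command that would use it.
DB="lemondb"
OUT="${DB}_backup_$(date +%Y-%m-%d).sql"
echo "mysqldump -u root -p $DB > $OUT"
```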

Database Restore

It is good practice to back up your database over time and after any large change (see Database Back up). To restore a backed-up database, first open a terminal (Linux) or command prompt (Windows), then move to the directory where the backup file is located (see Changing file location).

To restore a database type: mysql -u root -p -h DBSERVER dbname < dbname.sql

(localhost is the default DBSERVER name)

Example: mysql -u root -p -h localhost lemondb < lemon_backup.sql

If you do not know the database name, log in to MySQL to list the databases: mysql -u root -p

Then: (Enter your password)

Then type: show databases; (Do not forget the ; at the end)

How to delete a database: echo "drop database lemondb" | mysql -uroot -p

Changing file location (extra database help)

Sometimes you need to change directory (dir) using the terminal (Linux) or command prompt (Windows). First, determine where you would like to save your file, or where your file is located, and then move into that location. If it is on an external hard drive or flash drive, you will need to go there first. Open a terminal (Linux) or command prompt (Windows); the prompt should look something like this:

terminal(Linux): office@server:~$

command prompt(Windows): C:\Users\Office>

Now remember the drive letter; this is where you need to type it in:

terminal(Linux): office@server:~$ cd /f

command prompt(Windows): C:\User\office> f:

Then navigate to the folder you would like to save into. If you do not already have a folder, you can make one using mkdir folder_name (name it something that is easy to identify).

terminal(Linux): office@server:~/F$ cd folder_name

command prompt(Windows): F:> cd folder_name
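Putting the two steps together on Linux (using /tmp as a stand-in location; the folder name is just an example):

```shell
# Create the destination folder (if missing) and move into it.
mkdir -p /tmp/lemon_backups
cd /tmp/lemon_backups
pwd    # should print /tmp/lemon_backups
```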

LibreCAD - Worth Keeping An Eye On It

LibreCAD, a 2D CAD drawing tool, has reached version 1.0 after more than a year of work. LibreCAD, previously known as CADuntu, works natively on Mac OS X, Windows, and Linux, and is based on the community edition of QCad.
Among the changes between QCad Community Edition and LibreCAD: an up-to-date Qt4-based user interface (QCad Community Edition uses Qt3), a plug-in system, autosaving, and better reading of DXF files. Also, QCad Community Edition is officially available only for Linux (QCad itself is cross-platform, but not free), while LibreCAD is cross-platform.
With this release, LibreCAD is finally considered stable, but it still needs a lot of work, so don't expect it to compete with professional applications like AutoCAD. Furthermore, LibreCAD doesn't come with documentation - it initially included QCad's documentation, but that had to be removed because it wasn't published under the GPL.

Also, the application doesn't support .dwg files yet. There was some work on this, but it hasn't been implemented because of some license issues with LibreDWG.

But the development doesn't stop here. The LibreCAD v2 branch is already work in progress (PPA at the end of the post) and it includes many new features such as:

  • new snapping system
  • isometric grids
  • trisecting an angle
  • drawing inscribed circles and ellipses
  • drawing common tangent lines for two ellipses
  • better international fonts support, including CJK
  • experimental offset support
  • better performance

Read More - Click Here!


LibreOffice Impress Embedded Video

( @ SMB Technologist) Power users of presentation applications go beyond the standard-issue text and fancy backgrounds for their slides. Their presentations frequently include sounds, images, and sometimes video. Fortunately, with the open source flagship office suite LibreOffice, embedding videos into a presentation is as simple as adding text to a slide. I'll demonstrate how simple it is to embed video into a LibreOffice Impress slide. 

A caveat: The only format that is readily supported is Ogg (Theora video, usually with Vorbis audio). Fear not -- there are plenty of ways to convert nearly any format into an Ogg video. I will first show one method of converting your video, and then we'll embed that video into a presentation.

Converting your video format

My conversion method of choice is done with the help of OpenShot Video Editor. With this editor, you can import nearly any format and export it to almost any format. Here's how to get your video into a suitable format for embedding into a presentation:


  1. Open OpenShot.
  2. Go to File | Import Files and add your file.
  3. Go to File | Export Video and in the Export window select All Formats from the Profile drop-down (Figure A).
  4. Give the file a name.
  5. Select the Export To Folder location.
  6. Select Ogg (theora/vorbis) from the Target drop-down.
  7. Select DV/DVD NTSC from the Video Profile drop-down.
  8. Select either Med or High for Quality.
  9. Click the Export Video button.


Figure A



Exporting your video to the Ogg (Theora/Vorbis) format.

Embedding the video into a presentation

The size of your video will seriously affect the size of your overall presentation; it also dictates how much time is required for the actual embedding. If the file is large (I've embedded files larger than 600 MB), Impress might become temporarily unresponsive and look as if it will crash at any moment. Wait for the embedding to complete before force-closing the application.

Here's how to embed your video:


  1. Open a presentation.
  2. Go to Insert | Movie and Sound.
  3. Locate the video to be inserted and click Open.


Once embedded (which may take a while), you can test that the video plays by pressing F5, which starts the slide show (Figure B).

Figure B



The video file embedded into a slide.

To give the video-embedded slide a more seamless look, you might remove any text on the page and change the page background to black. To change the background color and give the slide a cleaner appearance, follow these steps:


  1. Go to Format | Page.
  2. Select Color from the Fill drop-down.
  3. Select Black.
  4. Click OK.
  5. When asked, do this for only this page (not all pages).



The embedding of video into your Impress slide shows is an easy way to dazzle your audience. You could even use this technique to create self-guided presentations, which could be especially useful if you have a paralyzing fear of public speaking.

LibreOffice sees new platforms, more users: Online, Android, and iOS versions on the way, by Brian Proffitt

In Paris, the LibreOffice Conference is in full swing, with significant news being released, including the news of the launch of online, Android, and iOS versions of the open source office suite.

There have been no formal news releases from the conference yet, and no reporters seem to be on the ground there. But the glimmers of news coming out of the conference so far are quite interesting. What we know is coming out of the conference via Twitter, but there's still quite a bit. Here's a round-up of the conference announcements thus far:

  • Plans are in the works for a browser-based version of LibreOffice, LibreOffice Online.
  • Ports of LibreOffice to the Android and iOS platforms are in the works.
  • Région Île-de-France (the region where Paris resides and itself a premium sponsor of the conference) will be distributing 800,000 USB keys loaded with LibreOffice and a cloud plugin to that region's students. Parisian students and their families will be getting heavy exposure to the LibreOffice application.


  • The French government will be shifting 500,000 Windows users to LibreOffice. This will increase the installed base of LibreOffice Windows users by five percent in a single migration.

I reached out and contacted Italo Vignoli, spokesman for The Document Foundation, the German non-profit that manages LibreOffice, for more information on the new versions of LibreOffice. Speaking to me this morning from the conference, Vignoli provided more details on the news.

The LibreOffice Online Prototype, developed by SUSE Linux developer Michael Meeks, will be based on an HTML5 canvas and a GTK+ Broadway framework developed by Red Hat's Alex Larsson. The prototype is not ready for public use yet, but a demo video is available (Note: Your browser needs WebM support, such as Chrome's). Vignoli estimated the Online project will be ready in about a year's time.

The port of LibreOffice to Android and iOS is basically complete, but only in the sense that the code compiles on the new platforms. Vignoli emphasized that these flavors of LibreOffice are assuredly not ready, because their user interfaces are still based on the current LibreOffice interface and are therefore unusable on Android and iOS - at least for now. The ports to Android and iOS are based on the voluntary work of Tor Lillqvist, a SUSE Linux developer from Finland who was instrumental in porting GIMP to Windows.

"The LibreOffice Android and iOS port has the objective of bringing the office suite to iPads and Android tablets, and eventually smaller devices," Vignoli said. He added that the user interface conversion has started, but the Document Foundation would certainly welcome the participation of any commercial entity that wanted to expedite the development to get LibreOffice ready for one of these platforms.

Vignoli emphasized more than once that these are still very early projects that will become products sometime in late 2012 or early 2013.

LibreOffice seems to be doing very well in the future-plans department. Every one of these announcements points to a very active LibreOffice community and a growing deployment base - something competitors will need to re-create, and soon.

Life Without Open Source???

Open source is the basis for the Internet, search engines, Linux, the iPod, Wikipedia, Android, Firefox, OS X, Apache, PHP, Perl, Python, Ruby (Ruby on Rails), Drupal, Google Code, SSH, Bugzilla, GIMP, OpenOffice...

The list goes on. But it doesn’t end at software. There are open source hardware projects, open vehicles, open politics. Open source has influenced so many people on so many levels. But let’s just look at it from the geekier perspective. Imagine your day to day life without open source.

Imagine all of your servers were either Microsoft or Unix servers. Imagine having to code all of your sites in a proprietary language or, worse yet, with a closed-source application like Dreamweaver. Imagine Web 2.0 never came to be. How about VoIP (Voice over IP) and Skype? Imagine having to pay for every single Web service and/or application you created or used for your company. Imagine the cost of securing the sum total of your business's network. Imagine the Google Cloud never came to be. Imagine Beowulf clusters never existed. Imagine no technology ever threatened the Microsoft monopoly. One Laptop Per Child????

The short-list of possibilities is, indeed, daunting. But ultimately, what it points out is that open source technology has, more so than Microsoft, helped technology become what it is today. I do not deny that IBM, the IBM clones, and Microsoft brought the PC into the home. But I personally cannot imagine what that PC would be today without the community-driven push of open source. And what about my PC life without open source? This goes beyond having to actually pay for software; the idea quickly leads to the question, “Would technology A even exist?” I use wikis all the time. I use Drupal and Xoops on a daily basis. I can't imagine what would have come into existence to serve the same purpose as those technologies (and how much it would cost). Would there be an equivalent technology had they never been developed?

Linux & Windows - Instant On - Side by Side

With all this talk about Linux vs. Windows, some people are saying "why choose?", "why not both?"

A trend in portable computers is the use of an "instant on" Linux installation that gives you quick access to very basic functions like a media player and maybe a web browser. You can boot into that when your needs are limited, or boot into Windows if you need to do more. This article discusses two products that create this dual environment, HyperSpace and SplashTop: Read More - Click Here!

Linux Add User

Step # 1: Add a user jsmith to the UNIX/Linux system

If the user name is jsmith:

useradd jsmith

passwd jsmith

mkdir /home/jsmith

chown jsmith:users /home/jsmith

Step # 2: Add a user to samba

Now user jsmith has an account on the Linux/UNIX box. Use the smbpasswd command with the -a option to add that username to the local smbpasswd file:

smbpasswd -a jsmith

Linux Fix Software Errors and Dependencies

  1. Type sudo dpkg --configure -a. It will error out, but don't worry.
  2. Find the packages causing the problems and type sudo dpkg -r PACKAGE_NAME for each one to remove it. Some of them will fail to be removed; add those to a list.
  3. Run sudo dpkg --configure PACKAGE_NAME for each package in your list. It will configure them. This time, you should not see any errors.
  4. Run sudo apt-get install -f to fix dependencies.
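
The four steps above can be sketched as a small script. This is a dry-run that only prints each command (the package names are placeholders for whichever packages failed on your system); swap the echo for sudo to run it for real:

```shell
#!/bin/sh
# Dry-run sketch of the repair sequence. BROKEN_PKGS is hypothetical:
# substitute the packages that failed to remove in step 2.
run() { echo "+ $*"; }   # swap the echo for: sudo "$@"

BROKEN_PKGS="foo-pkg bar-pkg"

run dpkg --configure -a                # step 1: expect errors here
for p in $BROKEN_PKGS; do
    run dpkg -r "$p"                   # step 2: remove each culprit
done
for p in $BROKEN_PKGS; do
    run dpkg --configure "$p"          # step 3: configure the stragglers
done
run apt-get install -f                 # step 4: fix remaining dependencies
```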







Clears the page list.



Opens any format that imagemagick supports. PDFs will have their embedded images extracted and imported one per page.



Sets options before scanning via SANE.



Chooses between available scanners.


# Pages

Selects the number of pages, or all pages to scan.


Source document

Selects between single sided or double sides pages.

This affects the page numbering. Single sided scans are numbered consecutively. Double sided scans are incremented (or decremented, see below) by 2, i.e. 1, 3, 5, etc..


Side to scan

If double sided is selected above, assuming a non-duplex scanner, i.e. a scanner that cannot automatically scan both sides of a page, this determines whether the page number is incremented or decremented by 2.

To scan both sides of three pages, i.e. 6 sides:

  1. Select:

    # Pages = 3 (or "all" if your scanner can detect when it is out of paper)

    Double sided

    Facing side

  2. Scans sides 1, 3 & 5.
  3. Put pile back with scanner ready to scan back of last page.
  4. Select:

    # Pages = 3 (or "all" if your scanner can detect when it is out of paper)

    Double sided

    Reverse side

  5. Scans sides 6, 4 & 2.
  6. gscan2pdf automatically sorts the pages so that they appear in the correct order.
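
The numbering above, spelled out with a quick shell illustration: the facing pass yields sides 1, 3, 5, the reverse pass yields 6, 4, 2, and a numeric sort restores the natural order (which gscan2pdf does for you):

```shell
# Facing pass counts up by 2; reverse pass counts down by 2.
facing="1 3 5"
reverse="6 4 2"
printf '%s\n' $facing $reverse | sort -n   # prints 1 through 6 in order
```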


Device-dependent options

These, naturally, depend on your scanner. They can include

Page size.
Mode (colour/black & white/greyscale)
Resolution (in PPI)

Guarantees that a "no documents" condition will be returned after the last scanned page, to prevent endless flatbed scans after a batch scan.


After sending the scan command, wait until the button on the scanner is pressed before actually starting the scan process.


Selects the document source. Possible options can include Flatbed or ADF. On some scanners, this is the only way of generating an out-of-documents signal.



Saves the selected or all pages as a PDF, DjVu, TIFF, PNG, JPEG, PNM or GIF.


PDF Metadata

Metadata is information that is not visible when viewing the PDF, but is embedded in the file, making it searchable; it can typically be examined with the "Properties" option of a PDF viewer.

The metadata are completely optional, but can also be used to generate the filename; see Preferences for details.



Both black and white and colour images produce better compression than PDF.


Email as PDF

Attaches the selected or all pages as a PDF to a blank email. This requires xdg-email, which is in the xdg-utils package. If this is not present, the option is ghosted out.



Prints the selected or all pages.


Compress temporary files

If your temporary directory ($TMPDIR) is getting full, this function can be useful: it compresses all images to LZW-compressed TIFFs, which require much less space than the PNM files typically produced by SANE or by importing a PDF.
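
By hand, the same kind of compression could be sketched with ImageMagick (an assumption for illustration; gscan2pdf handles this internally), here applied to a tiny generated PNM standing in for a real scanned page:

```shell
#!/bin/sh
# A tiny generated 2x2 bitmap stands in for a scanned page; the convert
# call illustrates the kind of LZW TIFF compression described above.
pnm=$(mktemp --suffix=.pnm)
printf 'P1\n2 2\n0 1\n1 0\n' > "$pnm"
if command -v convert >/dev/null 2>&1; then
    convert "$pnm" -compress LZW "${pnm%.pnm}.tif"
    ls -l "$pnm" "${pnm%.pnm}.tif"    # compare the two file sizes
fi
```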





Deletes the selected page.



Renumbers the pages from 1..n.

Note that the page order can also be changed by drag and drop in the thumbnail view.



The select menus can be used to select all, even, odd, blank, dark or modified pages. Selecting blank or dark pages runs imagemagick to make the decision. Selecting modified pages selects those which have been modified by threshold, unsharp, etc., since the last OCR run.



The preferences menu item allows control of the default behaviour of various functions. Most of these are self-explanatory.



gscan2pdf supports two frontends, scanimage and scanadf. scanadf support was added when it was realised that scanadf works better than scanimage with some scanners. On Debian-based systems, scanadf is in the sane package, whereas scanimage is in sane-utils. If scanadf is not present, the option is ghosted out.

In 0.9.27, Perl bindings for SANE were introduced and two further frontends, scanimage-perl and scanadf-perl (scanimage and scanadf transliterated from C into Perl) were added.


Default filename for PDF files

The following variables are available, which are replaced by the corresponding metadata:

 %a     author
 %t     title
 %y     document's year
 %Y     today's year
 %m     document's month
 %M     today's month
 %d     document's day
 %D     today's day




Zoom 100%

Zooms to 1:1. How this appears depends on the desktop resolution.


Zoom to fit

Scales the view such that all the page is visible.


Zoom in


Zoom out


Rotate 90 clockwise

The rotate options require the package imagemagick and, if this is not present, are ghosted out.


Rotate 180


Rotate 90 anticlockwise





Changes all pixels darker than the given value to black; all others become white.


Unsharp mask

The unsharp option sharpens an image. The image is convolved with a Gaussian operator of the given radius and standard deviation (sigma). For reasonable results, radius should be larger than sigma. Use a radius of 0 to have the method select a suitable radius.





unpaper is a utility for cleaning up a scan.


OCR (Optical Character Recognition)

The gocr, tesseract, ocropus or cuneiform utilities are used to produce text from an image.

There is an OCR output buffer for each page, which is embedded as plain text behind the scanned image in the PDF produced. This way, desktop search tools such as Beagle can index the plain text.

In DjVu files, the OCR output buffer is embedded in the hidden text layer. Thus these can also be indexed by Beagle.

There is an interesting review of OCR software; an important conclusion was that 400ppi is necessary for decent results.

Up to v2.04, the only way to tell which languages were available to tesseract was to look for the language files. Therefore, gscan2pdf checks the path returned by:

 tesseract '' '' -l ''

If there are no language files in the above location, then gscan2pdf assumes that tesseract v1.0 is installed, which had no language files.


Variables for user-defined tools

The following variables are available:

 %i     input filename
 %o     output filename
 %r     resolution

An image can be modified in-place by just specifying %i.
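
For example, two hypothetical tool definitions (the ImageMagick commands here are illustrative assumptions; any command line that reads %i and writes %o, or rewrites %i in place, will do):

```
convert %i -despeckle %o    # writes a cleaned copy to the output file
mogrify -normalize %i       # modifies the image in place via %i only
```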




Why isn't option xyz available in the scan window?

Possibly because SANE or your scanner doesn't support it.

If an option listed in the output of scanimage --help that you would like to use isn't available, send me the output and I will look at implementing it.


I've only got an old flatbed scanner with no automatic sheetfeeder. How do I scan a multipage document?

If you are lucky, you have an option like Wait-for-button or Button-wait, where the scanner will wait for you to press the scan button on the device before it starts the scan, allowing you to scan multiple pages without touching the computer.

Otherwise, you have to set the number of pages to scan to 1 and hit the scan button on the scan window for each page.


Why is option xyz ghosted out?

Probably because the package required for that option is not installed. Email as PDF requires xdg-email (xdg-utils), the unpaper option requires unpaper, and the rotate options require imagemagick.


Why can I not scan from the flatbed of my HP scanner?

Generally for HP scanners with an ADF, to scan from the flatbed, you should set "# Pages" to "1", and possibly "Batch scan" to "No".


When I update gscan2pdf using the Update Manager in Ubuntu, why is the list of changes never displayed?

As far as I can tell, this is pulled from a central Ubuntu changelog server, and therefore only the changelogs from official Ubuntu builds are displayed.


Why can gscan2pdf not find my scanner?

If your scanner is not connected directly to the machine on which you are running gscan2pdf and you have not installed the SANE daemon, saned, gscan2pdf cannot automatically find it. In this case, you can specify the scanner device on the command line:

gscan2pdf --device <device>

Linux Has Not Caught On Because Of No Marketing

Oh yes, it's caught on in IT departments around the globe. But why not on the desktop or the small-office server? Could it be that, because Linux is a FREE OS (operating system), nobody will market it?

We all know that consumerism is fueled by marketing. Why, even some bad products become best sellers thanks to dynamic marketing campaigns. When Microsoft became THE OS of choice on the IBM PC, it didn't even work properly! Excel would supersede SuperCalc and Lotus 1-2-3? NO WAY! Access vs. dBase? Not even!

I honestly believe that lack of marketing is the biggest issue keeping Linux from dominating the desktop and the small-office server. Think about it: Jane Consumer is shopping for a new operating system because hers finally died. She goes to the store and sees, sitting next to one another, Windows 7 for roughly $200.00 USD and Debian Linux for FREE. Which is she going to choose? Even without knowing the difference, any consumer (unless a Windows fanatic) will opt to save roughly $200.00. Jane would probably pick Linux, and even if it didn't work out, she could always go back and plunk down 200 bones for Windows 7; nothing to lose.

But that's not happening. It's not happening because there is no shiny Linux box, no 30-second TV spot, no drive-time radio ad. Nobody is going to spend big bucks on a well-orchestrated viral marketing campaign for a FREE product. There's no money in it! Meanwhile, Linux is not happening because the public has no idea of the savings they are missing on so many levels:


No viruses

No malware

No OS crashes


No Rebooting every day

No insane hardware requirements

No EULAs that are nothing but legal-speak




Cost per user for Microsoft VS Linux – How many users do you have?


Windows Desktop License: 199.00

Windows Network License: 99.00

Windows Email License: 49.00

Microsoft Office: 449.00

Total Cost Microsoft: 796.00

Linux Desktop License: 0.00

Linux Network License: 0.00

Revolution Email License: 0.00

OpenOffice: 0.00

Total Cost Linux: 0.00


They say to “follow the money trail”. Well, there is none, and that’s why Linux has not caught on!



Linux LS Command Examples

( @ TechMint) The ls command is one of the most frequently used commands in Linux; it is probably the first command you run when you reach the prompt of a Linux box. We use ls daily, yet we may not be aware of, and may never use, all the options available. In this article, we discuss the basic ls command, trying to cover as many parameters as possible.

1. List Files using ls with no option

Running ls with no option lists files and directories in bare format, where we cannot view details like file types, sizes, modified dates and times, permissions, links, etc.

# ls

0001.pcap        Desktop    Downloads         index.html   install.log.syslog  Pictures  Templates
anaconda-ks.cfg  Documents  fbcmd_update.php  install.log  Music               Public    Videos

2. List Files With Option -l

Here, ls -l (-l is the letter 'l', not the number one) shows each file or directory with its size, modified date and time, name, owner, and permissions.

# ls -l

total 176
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos

3. View Hidden Files

List all files, including hidden files starting with ‘.’.

# ls -a

.                .bashrc  Documents         .gconfd          install.log         .nautilus     .pulse-cookie
..               .cache   Downloads         .gnome2          install.log.syslog  .netstat.swp  .recently-used.xbel
0001.pcap        .config  .elinks           .gnome2_private  .kde                .opera        .spice-vdagent
anaconda-ks.cfg  .cshrc   .esd_auth         .gtk-bookmarks   .libreoffice        Pictures      .tcshrc
.bash_history    .dbus    .fbcmd            .gvfs            .local              .pki          Templates
.bash_logout     Desktop  fbcmd_update.php  .ICEauthority    .mozilla            Public        Videos
.bash_profile    .digrc   .gconf            index.html       Music               .pulse        .wireshark

4. List Files in Human-Readable Format with Option -lh

The combination -lh shows sizes in human-readable format.

# ls -lh

total 176K
-rw-r--r--. 1 root root  683 Aug 19 09:59 0001.pcap
-rw-------. 1 root root 1.6K Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Documents
drwxr-xr-x. 4 root root 4.0K Aug 16 02:55 Downloads
-rw-r--r--. 1 root root  21K Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root  46K Jul 31 09:58 index.html
-rw-r--r--. 1 root root  48K Jul 31 02:17 install.log
-rw-r--r--. 1 root root  12K Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Music
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Public
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Templates
drwxr-xr-x. 2 root root 4.0K Jul 31 02:48 Videos

5. List Files and Directories with ‘/’ Character at the end

Using the -F option with the ls command will add a ‘/’ character at the end of each directory name.

# ls -F

0001.pcap        Desktop/    Downloads/        index.html   install.log.syslog  Pictures/  Templates/
anaconda-ks.cfg  Documents/  fbcmd_update.php  install.log  Music/              Public/    Videos/

6. List Files in Reverse Order

The ls -r option displays files and directories in reverse order.

# ls -r

Videos     Public    Music               install.log  fbcmd_update.php  Documents  anaconda-ks.cfg
Templates  Pictures  install.log.syslog  index.html   Downloads         Desktop    0001.pcap

7. Recursively list Sub-Directories

The ls -R option will recursively list very long directory trees. See an example of the command's output.

# ls -R

total 1384
-rw-------. 1 root     root      33408 Aug  8 17:25 anaconda.log
-rw-------. 1 root     root      30508 Aug  8 17:25 anaconda.program.log

total 132
-rw-r--r--  1 root root     0 Aug 19 03:14 access_log
-rw-r--r--. 1 root root 61916 Aug 10 17:55 access_log-20120812

total 68
-rw-r--r--  1 lighttpd lighttpd  7858 Aug 21 15:26 access.log
-rw-r--r--. 1 lighttpd lighttpd 37531 Aug 17 18:21 access.log-20120819

total 12
-rw-r--r--. 1 root root    0 Aug 12 03:17 access.log
-rw-r--r--. 1 root root  390 Aug 12 03:17 access.log-20120812.gz

8. Sort Output by Modification Time

The combination -ltr lists entries by modification time, showing the most recently modified file or directory last.

# ls -ltr

total 176
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap

9. Sort Files by File Size

The combination -lS sorts the display by file size, largest first.

# ls -lS

total 176
-rw-r--r--. 1 root root 48867 Jul 31 02:17 install.log
-rw-r--r--. 1 root root 46701 Jul 31 09:58 index.html
-rw-r--r--. 1 root root 21262 Aug 12 12:42 fbcmd_update.php
-rw-r--r--. 1 root root 11439 Jul 31 02:13 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Desktop
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Documents
drwxr-xr-x. 4 root root  4096 Aug 16 02:55 Downloads
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Music
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Pictures
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Public
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Templates
drwxr-xr-x. 2 root root  4096 Jul 31 02:48 Videos
-rw-------. 1 root root  1586 Jul 31 02:17 anaconda-ks.cfg
-rw-r--r--. 1 root root   683 Aug 19 09:59 0001.pcap

10. Display Inode number of File or Directory

With the -i option, ls lists each file or directory with its inode number printed before the name.

# ls -i

20112 0001.pcap        23610 Documents         23793 index.html          23611 Music     23597 Templates
23564 anaconda-ks.cfg  23595 Downloads            22 install.log         23612 Pictures  23613 Videos
23594 Desktop          23585 fbcmd_update.php     35 install.log.syslog  23601 Public

11. Show Version of ls Command

Check the version of the ls command.

# ls --version

ls (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Richard M. Stallman and David MacKenzie.

12. Show Help Page

Lists the help page of the ls command with its options.

# ls --help

Usage: ls [OPTION]... [FILE]...

13. List Directory Information

The ls -l /tmp command lists the files under the /tmp directory, whereas the -ld parameter displays information about the /tmp directory itself.

# ls -l /tmp
total 408
drwx------. 2 narad narad   4096 Aug  2 02:00 CRX_75DAF8CB7768
-r--------. 1 root  root  384683 Aug  4 12:28 htop-1.0.1.tar.gz
drwx------. 2 root  root    4096 Aug  4 11:20 keyring-6Mfjnk
drwx------. 2 root  root    4096 Aug 16 01:33 keyring-pioZJr
drwx------. 2 gdm   gdm     4096 Aug 21 11:26 orbit-gdm
drwx------. 2 root  root    4096 Aug 19 08:41 pulse-gl6o4ZdxQVrX
drwx------. 2 narad narad   4096 Aug  4 08:16 pulse-UDH76ExwUVoU
drwx------. 2 gdm   gdm     4096 Aug 21 11:26 pulse-wJtcweUCtvhn
-rw-------. 1 root  root     300 Aug 16 03:34 yum_save_tx-2012-08-16-03-34LJTAa1.yumtx
# ls -ld /tmp/

drwxrwxrwt. 13 root root 4096 Aug 21 12:48 /tmp/

14. Display UID and GID of Files

To display the UID and GID of files and directories, use the -n option with the ls command.

# ls -n

total 36
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Downloads
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Music
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Pictures
-rw-rw-r--. 1 500 500   12 Aug 21 13:06 tmp.txt
drwxr-xr-x. 2 500 500 4096 Aug  2 01:52 Videos

15. ls Command and its Aliases

If we make an alias for the ls command so that executing ls takes the -l option by default, it will display the long listing mentioned earlier.

# alias ls="ls -l"

Note: You can list the aliases available on your system with the alias command, as shown in the example below.

# alias

alias cp='cp -i'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

To remove an alias previously defined, just use the unalias command.

# unalias ls

In our next article, we'll cover more advanced ls commands with examples. If we've missed anything in the list, please let us know via the comment section.

Linux OpenDocMan Document Management

(@ TechRepublic) The OpenDocMan open source tool can help your small business get its documentation into a form that is easy to use and to manage.

For many companies, storing documentation on a shared drive is enough. But what if you need a system for check-out/check-in, easy search, departments, and user control? You can go with a full-blown content management system, or you can focus your energy on a single-minded document management system such as OpenDocMan. This free document management system offers these features:

  • Add any file type to the system
  • Upload directly from your browser
  • Metadata fields for each file
  • Departments/categories
  • Check-out/check-in
  • Revision history
  • Documents stored physically on the server
  • File expiration
  • Custom document properties
  • Automated document review process
  • Automated file expiration process
  • Approve or reject a new or changed document
  • E-mail notification
  • Quick search by author, department, or category
  • Full search by metadata, author, department, category, file name, comments, etc.


  • PHP 5/MySQL 5
  • Apache/IIS

A LAMP (Linux Apache MySQL PHP) or a WAMP (Windows Apache MySQL PHP) server will do just fine. In this post, I will be installing on an Ubuntu-based LAMP server. If you install on a WAMP server, you will have to make slight adjustments to the process.

Preparing for installation

With a LAMP server up and running, you need to take care of several tasks prior to installation.

1. Create a database, which you can do with your normal tool (I prefer MySQL Workbench). You will be asked to name the database in the installation process, so be sure to remember the name you give it.

2. Create a data directory outside of the OpenDocMan install directory. For my installation, I created the directory dataDIR in /var/www/ with the command sudo mkdir /var/www/dataDIR.

3. Give the newly created directory write permissions with the command sudo chmod -R ugo+w /var/www/dataDIR.

Installing OpenDocMan

The first step is to download the latest, stable release from the OpenDocMan download page (in either .zip or .gz format). After the file downloads, move it into the document root of your web server (in my case, /var/www/).

Next, open a terminal window and change into the Apache document root. You must unpack the archive file by using a command like sudo tar xvzf opendocman-XXX.tar.gz (XXX is the release number). This will create a new directory called opendocman-XXX (XXX is the release number). I prefer to rename that directory for the sake of simplicity. To do that, issue the command sudo mv opendocman-XXX opendocman (XXX is the release number).

In order to smooth out the installation process, you need to make sure the opendocman folder belongs to the user associated with the web server. For example, if your web server user is apache, you'd give ownership of the folder with a command like sudo chown -R apache:apache opendocman. This should give the installation the right permissions to proceed. Now, open a web browser and point it to http://ADDRESS_TO_SERVER/opendocman (ADDRESS_TO_SERVER is the address of the machine hosting OpenDocMan). This will start the web-based installer.
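
Gathered into one place, the preparation and installation commands look like this. It is a dry-run that only echoes each command (XXX, the database name, and the apache user are placeholders to adapt); swap the echo for sudo to run it for real:

```shell
#!/bin/sh
# Dry-run: prints each step instead of running it.
run() { echo "+ $*"; }   # swap the echo for: sudo "$@"

run mysql -u root -p -e "CREATE DATABASE opendocman;"   # or MySQL Workbench
run mkdir /var/www/dataDIR                  # data dir outside the install dir
run chmod -R ugo+w /var/www/dataDIR
run tar xvzf /var/www/opendocman-XXX.tar.gz -C /var/www
run mv /var/www/opendocman-XXX /var/www/opendocman
run chown -R apache:apache /var/www/opendocman   # web-server user varies
```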

Since this is a new installation, you will be prompted to click the Create A Configuration File button (Figure A). If you get an error after clicking the button, you need to manually give write permissions to the contents of the opendocman folder. Figure A

Click the image to enlarge.

The next screen will ask you for the database information (Figure B); all of the information requested in this screen is self-explanatory. (If you haven't already created the database, you need to do that first.) Remember to enter everything carefully and that the data directory is the new directory you created outside of the OpenDocManager root directory. Also, be sure to make note of the password you give the admin user (i.e., the only available user upon completion of the installation) because you'll need that to log in.

After you enter this information, click the Next button. If you get a permissions error for the templates_c directory, issue the command sudo chmod -R ugo+w /var/www/opendocman/templates_c.

Figure B

Click the image to enlarge.

Click the Run The Installer button for the installation to complete. After you click the Click Here link that appears, you will go to a login page. Log in with the user admin and the password you created during installation.

Congratulations! You have a working installation of OpenDocManager.

Post installation

You need to go back to the terminal window and issue the command sudo rm -rf /var/www/opendocman/install. Then you should go to the admin panel (Figure C) and start adding departments, categories, users, etc. From this point, the system is incredibly easy to use. Your users will be adding docs, checking docs in and out, and more. Figure C

Click the image to enlarge.

Linux PDF Printer using CUPS-PDF

As with any other system, at some point you will need to print to a PDF document to send to someone else. Ubuntu and Linux in general do not come with a pre-installed PDF printer, so you have to set it up yourself.

Unlike on Windows and other operating systems, you do not need to download special software to print to PDF; you are only required to install a package which is readily available.

In most installations you can run synaptic or your preferred package manager and install cups-pdf. This will automatically enable your pdf printer, which you can then use to generate your PDFs.

Step by Step Installation.

  1. Open Synaptic package manager & search for cups-pdf

  2. Install this package, by selecting it and clicking install.

or else via terminal 

sudo apt-get update
sudo apt-get install cups-pdf
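
Once the package is installed, the queue can also be driven from the terminal. This sketch assumes the queue created by cups-pdf is named "PDF" (a common default; confirm with lpstat -p):

```shell
#!/bin/sh
# Guarded so it is a no-op on systems without CUPS installed.
have_lp=0
if command -v lp >/dev/null 2>&1; then
    have_lp=1
    lpstat -p 2>/dev/null || true          # list the available print queues
    echo "hello, pdf" | lp -d PDF || true  # output lands under ~/PDF/
fi
```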

How to use CUPS PDF

  1. Using the PDF printer is quite easy: all you need to do is issue a print command and select the PDF printer from the list.

  2. A PDF of the printed page will then be available under your home folder, in a directory named PDF (/home/user/PDF).

Linux Success Does Not Rely on Microsoft Garbage


Find out why Jack Wallen, TechRepublic, doesn't think Linux should be installed on old, unsupported Windows XP machines. With Windows XP about to finally meet its demise, many users are espousing its replacement with Linux. It makes perfect sense, because there will be millions of machines out there that are no longer supported by Microsoft. Those millions of machines can either add to our already insurmountable garbage problem, or they can continue to be used, sans updates.

Microsoft Windows without updates. That's little more than a security vulnerability in the wings. It would only be a very short matter of time before each and every one of those machines came crashing down. Instead of letting that happen, it's a very seductive proposition to grab a Linux distribution and resurrect that old machine. Why not? It's been part of the war cry of Linux for the longest time. 

Honestly, I'm all for keeping those millions of machines out of the scrap heaps, but I don't know how I feel about the Linux community crying out for everyone to use their out-of-date hardware for Linux. The success of Linux as a legitimate desktop operating system cannot, in any way, hinge on dumpster diving in Microsoft's garbage. In fact, winning the desktop war -- on any front -- cannot (and will not) be had by picking up any of the slack that smacks of the past. Success must begin in the present and quickly move into the future.

Consider this: The speed at which technology advances is now faster than ever. Yes, there's a large faction of people who hold onto the past (for various reasons, such as financial), but the vast majority of people who hold any influence over the world technology look to the future. This is also true of the mobile computing world -- it's all about the latest and greatest. The "what have you done for me lately" mindset is thick. 

With this in mind, Linux needs to embrace the future in ways that no other platform can. But how? By leading the charge of evolution and breaking ground that has yet to be broken. Linux has always been in a very unique position as a platform -- the open source nature means it's not beholden to a corporate entity, nor does it have to follow the same “rules” that tend to shackle Windows and OS X. Linux is free to do and be what it wants. With that wind behind its sails, Linux can re-define how people think about and use their PCs.

Canonical is doing just that with Unity, Xmir, Touch, and more. Although a good percentage of the Linux community is barking up a rather angry tree about the change they're bringing about, it's time they all got over themselves. Linux needs change -- from top to bottom -- and the Linux community needs to let go of the old ideas and ways, because the “I've always done X and X should be the way it is” mindset only hamstrings the platform. Linux needs to be agile, and if it can reclaim its ability to dodge the punches and adapt with lightning-quick reflexes, then there's nothing it can't do in this over-clocked evolutionary society.

Yes, people may grouse about change, but most quickly get over it when they realize that change is for the better. When Linux developers honestly listen and take the suggestions from the community to heart, all those major changes to the desktop can evolve in such a way as to absolutely benefit the end user -- and that is advancement for the people that the masses can stand behind.

However, if Linux continues to hold on to the same dusty war cries it's espoused for years, it won't get anywhere. Sure, Linux can resurrect that old hardware. You can slap Puppy Linux on it, but all you'll get is a lightning-fast computer that can't interact with modern business in a satisfying way. Plus, you'll have an old-school interface and a cumbersome package management system. Don't take this the wrong way, I'm not dogging on Puppy. In fact, I like Puppy Linux... just not as much as I like the idea of Linux pushing the boundaries of modern modality and showing the computing world just what it's capable of.

Linux talk in a language we can all understand

Linux language

/etc, ~/, $HOME, forward slash, root, sudo, CLI

( @ TechRepublic) The author believes that a language barrier is preventing Linux from being adopted, en masse, on the desktop. Do you think a simplified, standardized language for Linux is the solution? Honestly, the language of Linux doesn’t register on the radar of many computer users. And while it’s a great feeling to be a part of the “in crowd,” that's also one of the reasons why Linux often has a hard time gaining much of a foothold on desktops. Sure, anyone these days can learn a GUI -- but Linux users are challenged to learn a completely different way of thinking, a different language, and a different wiring of the brain.

Instead of a magical place where Documents, Downloads, Music, Desktop, and Favorites reside (what IT people understand as C:\users\USERNAME\ -- as in Windows 7), these folders live in /home/USERNAME/. To you and me, it’s as simple as typing out the command ‘grep.’ To those who aren’t hip to the lingo, this odd place called ‘home’ doesn’t compute. Why do you need a ‘home’ directory? And why do you also call it “tilde forward slash” (or “tilde slash” or “tilde wack”)?

The language of Linux

I've often said that in order for Linux to really make any headway in the realm of the desktop, it has to start targeting the majority of users on the planet. Those users are not:

  • Developers
  • Geeks
  • Gamers
  • Members of Mensa
  • Any of the characters on The Big Bang Theory

Linux needs to start speaking to people who use their computers for Facebook, Twitter, Pinterest, meme creation, email, a document here and there, chatting -- you know, your mom, your little sister and brother, your grandma and grandpa... the real average users who don’t know what a C drive or root partition are and who never (and I mean NEVER) want to issue a command (other than “Eat your vegetables.”)

The language of Linux is something that needs a bit of revision. You could see this happening with the Ubuntu GUI -- slowly they evolved the Linux desktop into something anyone could understand. Take, for instance, the package manager. Ubuntu switched from Synaptic to the Ubuntu Software Center -- a centralized software management tool that's very similar to the highly regarded Apple App Store. The same thing needs to occur with the language. I don’t propose a sweeping change to naming conventions that have been around for decades. What I believe is that, possibly, a second “language” needs to be adopted -- one that is simplified and standardized. As much as I don’t like the idea, this new language might have to take a nod from Microsoft or Apple.

So, instead of $HOME, home, or ~/, maybe we have Library. The Library could contain:

  • Desktop
  • Documents
  • Downloads
  • Music
  • Videos

Instead of root or /, we could adopt Windows C Drive nomenclature (or get cute and call it the “L Drive”).

You see where I’m going with this? Language is crucial to helping new user adoption. Confusing them out of the starting gate is the easiest way to lose them. It’s hard enough for those users to learn a new interface, let alone a completely new way of thinking and talking about the way they use their computers. If the language used with the public was drastically simplified, new users wouldn’t be nearly as hesitant to adopt it.

And this new language would hardly affect the core of the Linux community. No changes would need to be made to the code or the interfaces. The only noticeable changes might be within the marketing literature or documentation distributed to the public.

Here’s the possible thorn to be angrily jabbed into the side of the Linux community. We all know that adopting standards is something that never seems to fly with Linux. Why is it that a community that growls and snaps at Microsoft for not following standards won’t follow a no-brainer standard of its own? And nearly every distribution is guilty of this.

The distribution communities would all have to open their eyes and understand that this one simple standard would go a long way toward making Linux more accessible to the common user. Adopting a language that everyone can agree on and understand (without putting much thought into the process) could be that magic bullet Linux needs to finally make headway into the desktop.

Let’s talk Linux -- but in a language we can all grasp

Managing Debian and Ubuntu Packages DEB

(Joe Brockmeier) If you've ever thought "there should be a command that does X" for Linux, there probably is. Finding it, however, is not always easy. This is especially true when managing packages on Debian-based systems.

Debian's package tools make it easy to install and manage packages. For more complex tasks, however, the tools are not as well-advertised. Here are five options worth checking out.

Debian's package tools (dpkg, the APT suite and utilities like aptitude) make the basics of installing and managing packages very easy. When you want to do more complex things, however, they're still easy(ish), but the options or tools you want are not as well-advertised.

One thing that is often useful is to know why a package was installed. To find out, we want to use the aptitude utility, which will provide this very easily and quickly. Use aptitude why packagename to find out what package requires or suggests the package.

If you want to install packages that have been "kept back," you'll often hear people suggest that you use dist-upgrade instead of upgrade. However, a better way to do this -- without carrying forward a bunch of updates that you may not want -- is to use aptitude instead of apt-get.

Occasionally, you must know what package a file belongs to, or what files are in a package. For a file that's installed, use dpkg -S filename. For example, if you don't have Sendmail installed and want to know what package owns the symlink for /usr/lib/sendmail, you can run dpkg -S /usr/lib/sendmail. In my case, this returns:

postfix: /usr/lib/sendmail

What if you want to know what package would install a file? That's a job for apt-file. Note that this utility may not be installed by default. You'll also need to update its cache by running apt-file update. Then run apt-file search filename for the file you want to find. The more specific you can be, the better. If you look for a single string that's likely to be in many filenames (like "vim"), you'll get quite a few results. If you look for something very specific like /usr/lib/, then it will provide only one result. So if I search for /etc/apache2/apache2.conf even on a system without Apache installed, it will tell me that the package I'm looking for is apache2.2-common.

Last, but definitely not least, let's look at saving a list of all installed software. Say you want to do a clean install of Debian (or a Debian derivative) rather than running apt-get dist-upgrade, but you don't want to figure out by trial and error what packages you had before -- simply run dpkg --get-selections, and you'll see a full list of packages that are installed. Here, I also notice that my Linux Mint desktop has more than four times as many packages installed as my Debian server.

But what about restoring the packages? That's easy. Run dpkg --get-selections > installed-packages.txt. When you have the clean system, run dpkg --set-selections < installed-packages.txt, then apt-get dselect-upgrade to actually fetch and install them. Do be sure to back this file up before doing the install, of course.

While I tend to be partial to Debian packages, there's no reason Debian users should have all the fun. Next week, I'll take a look at tips for using RPM and Yum.

Master Your Mac With the Terminal

Even the Mac has a command line, and it's Berkeley Unix based. Yeah, that's right: macOS is basically wrapped around Unix, just like Linux. So how can we take advantage of the Mac command line?

Let's say, for example, that we want to find out where that pesky 5GB file is hiding, or the path of every file related to that app you thought you deleted. For these jobs and others, the command line or 'Terminal' is your new best friend.

So what is a Terminal? Terminal is a utility that allows you to interact with your Mac through the command line. Linux operating systems include similar tools, since both Linux and macOS are Unix-like OSes. The command line interface (CLI), or the language that you type into Terminal to interact with your Mac, is called bash. Everything we discuss below is a bash command.

Before you start using Terminal, you can customize it to your own personal preference. If you prefer, it’s even possible to download a third-party Terminal alternative for a customized look and feel.

General Mac Command Line Tips

First, let’s look at some basic Terminal facts you should know.

General Syntax

A bash command typically follows this pattern:

[Command] [Options] [Input or Path to File or Directory]

For example, in:

ls -la /Applications

ls is the command, -la is a compound of two individual options (-l and -a), and /Applications is the path to list.

The Path

Understanding paths will help you understand how macOS actually sees your files. Essentially, the path of a file is the Russian dolls’ nest of folders in which it’s contained, followed by the name of the file itself.

For example, on a Mac, the path of a file called My Secrets that lives on user John Doe’s Desktop is /Users/jdoe/Desktop/"My Secrets".

White Space

You must escape white space for the Terminal to process it properly. When bash sees a space, it interprets it as the end of a command. So if you have a folder with spaces in its name, like Path Test, and you try to list its contents with ls /Applications/Path Test, you’ll get this:

Invalid Path Causes Bash Command Failure

What’s going on here? Well, bash thinks that you called ls on /Applications/Path. When it couldn’t find that file, it stopped.

If you want bash to recognize the full name of your folder, you can either wrap the name in quotes or use a backslash, like so:

  • ls /Applications/"Path Test" or
  • ls /Applications/Path\ Test
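To see the difference firsthand, here's a minimal sketch using a throwaway folder (the /tmp path below is hypothetical, standing in for the Path Test example):

```shell
# Create a demo folder whose name contains a space
mkdir -p "/tmp/space demo"
touch "/tmp/space demo/readme.txt"

ls /tmp/space demo || true   # fails: bash splits the path into two arguments
ls "/tmp/space demo"         # works: quotes keep the path in one piece
ls /tmp/space\ demo          # works: the backslash escapes the space
```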


Many of the commands below require administrator-level access. If you’re not currently signed in to an administrator account, but you know the administrator’s password, you can place sudo (short for “superuser do”) in front of the command to temporarily give it administrator-level privileges.

Terminal Commands to Improve Your Workflow

Now that you know the basics, let’s take a look at some extremely handy commands. Note that you can pull up full information on these commands, including all their options and examples, by typing man <command name> into the Terminal.


find

  • Replaces: Spotlight
  • Why it’s better: It’s faster and searches system folders that Spotlight excludes, or has trouble indexing.

Spotlight tends to skip macOS system files unless you tell it not to, and even then can have trouble indexing them. Conversely, the bash find command can search for anything, in any place, and will output the full path of what you’re looking for.

The syntax of find consists of four parts. In order, they are:

  1. find
  2. the path of the directory you want to search (/Applications below)
  3. options (the below example has -name, which means that find will search for files that match that name)
  4. the string to search (the below example has Google Chrome)

You should know that find's -name option uses shell-style wildcard patterns (globs) rather than full regular expressions. A complete explanation of pattern matching is outside the scope of this article. However, the below example introduces a vital concept: the asterisk (*), or wildcard character.

Putting it at the beginning and end of the search string means that find will match names that have characters before and after the search term. In this case, *Google Chrome* will match any file or folder whose name contains that phrase.

It all comes together to look like this:

An Example of the bash find Command
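The same behavior can be reproduced anywhere with a throwaway directory (the /tmp/find-demo tree below is a hypothetical stand-in for /Applications):

```shell
# Build a small tree to search
mkdir -p "/tmp/find-demo/Google Chrome.app/Contents"
touch "/tmp/find-demo/Google Chrome.app/Contents/Info.plist"
touch /tmp/find-demo/notes.txt

# Wildcards before and after the term match it anywhere in a name
find /tmp/find-demo -name "*Chrome*"
# prints: /tmp/find-demo/Google Chrome.app
```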


du

  • Replaces: Cmd + I to show info.
  • Why it’s better: It can show you multiple folders at once, and typically takes less time to load.

du stands for “disk usage,” and can quickly tell you the size of a file or folder, or even a list of files within a folder.

The best options for du are:

  • -d (depth): When followed by a number, tells du to limit its report to that many levels of depth below the directory where it runs.
    • For example, if you run du -d 1 /Applications, it will only show you the total size of the folders and files in your Applications folder, not the sizes of subfolders within those folders.
  • -h (human readable): This will show you the size of your files suffixed with K, M, or G, which stand for kilobytes, megabytes, or gigabytes.

Take a look at du in action:

Bash Command du in Action
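Here’s a minimal, self-contained sketch of the same idea (the /tmp/du-demo paths are hypothetical):

```shell
# A demo folder with a nested subfolder
mkdir -p /tmp/du-demo/sub
printf 'hello' > /tmp/du-demo/file.txt
printf 'world' > /tmp/du-demo/sub/other.txt

# Report sizes one level deep, in human-readable units
du -d 1 -h /tmp/du-demo
```

Each output line shows a size followed by a path; the last line is the folder’s own total.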


mv

  • Replaces: Point-and-click moving of folders and files.
  • Why it’s better: It’s faster and requires no navigation.

You can quickly move a file or folder into another folder using mv. It works by simply changing the name of the path.

The syntax is mv <old file path> <new file path>.

For example, mv /Users/jdoe/Documents/file1 /Users/jdoe/Desktop/file1 will move file1 from jdoe’s Documents to his Desktop.
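As a runnable sketch (with hypothetical /tmp stand-ins for the real Documents and Desktop folders):

```shell
# Hypothetical stand-ins for jdoe's Documents and Desktop
mkdir -p /tmp/mv-demo/Documents /tmp/mv-demo/Desktop
touch /tmp/mv-demo/Documents/file1

# Moving is just rewriting the path
mv /tmp/mv-demo/Documents/file1 /tmp/mv-demo/Desktop/file1

ls /tmp/mv-demo/Desktop   # prints: file1
```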


ls

  • Replaces: Cmd + i to show info.
  • Why it’s better: It’s faster, can show info on multiple files at once, and is highly customizable.

ls is an incredibly powerful command for showing you exactly what’s in your folders. It also reveals who’s allowed to see them, if you have any hidden files or folders, and much more.

The best options for ls are:

  • -l (long): Shows the permissions for each file in the folder, the most recent modification time, the file owner, and filename.
  • -a (all): Shows you all the files in a folder, including the hidden files (great for showing the user library in macOS, which is hidden by default).

Here’s what the output looks like:

ls -la In Action
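A quick, self-contained way to see both options at work (the /tmp/ls-demo folder is hypothetical):

```shell
# One visible and one hidden file
mkdir -p /tmp/ls-demo
touch /tmp/ls-demo/visible.txt /tmp/ls-demo/.hidden

ls /tmp/ls-demo        # the plain listing skips dotfiles
ls -la /tmp/ls-demo    # long format, including .hidden and permissions
```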


mkdir

  • Replaces: Finder > File > New Folder
  • Why it’s better: It’s faster, and you can set the name right in the command instead of double-clicking the new folder.

Create new folders in an instant with this command.

Example: mkdir /Users/jdoe/Desktop/cool_stuff


rm

  • Replaces: Moving files to the Trash and emptying it.
  • Why it’s better: It’s faster, and good for deleting pesky files that the Trash won’t get rid of.

This command will delete, immediately and without prejudice, any file you put in its path. Obviously, use it with extreme caution. Unlike clicking Empty Trash, rm will not ask if you’re sure. It assumes you know what you’re doing.

One thing to note about rm is that by default, it will only delete files, not folders. To delete folders, you must use the -R option, which stands for recursive.

Example: rm -R /Users/jdoe/Desktop/cool_stuff
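A safe way to try it is on a throwaway tree (the /tmp paths are hypothetical -- never point rm -R at anything you care about):

```shell
# Build a disposable folder tree, then delete it
mkdir -p /tmp/rm-demo/cool_stuff
touch /tmp/rm-demo/cool_stuff/draft.txt

rm /tmp/rm-demo/cool_stuff 2>/dev/null || true   # plain rm refuses directories
rm -R /tmp/rm-demo/cool_stuff                    # -R recurses; no Trash, no undo
```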

Master Your Mac With the Terminal

Now you know some essential Terminal commands and can start integrating them into your daily Mac workflow. Once you get comfortable using bash, you can go beyond simply replacing your everyday tasks and start exploring powers that only the command line can offer.


Mercury Outboard Revs Its OpenSource Engine

A boat with a big Mercury outboard motor from Brunswick Corp. is so retro. What's hot is the company's open-source business integration engine.

Brunswick's technology division, dubbed WDI, built a BIE to connect Brunswick's dealers to its ERP system. With more than 10,000 dealers and numerous distributors, that was no small task. Adding to that complexity, many are mom-and-pop outlets with legacy IT systems (such as dial-up modems linked to an inventory system on a 386-based PC) that made integration and communication with Brunswick's heavy-duty corporate systems almost impossible. But thanks to the right technology choices the company made in 2001, the integration work continues to pay off today.

Technology incompatibility wasn't the only problem; users at first didn't want to share sales data with Brunswick for fear of being put at a competitive disadvantage, since all of their inventory data, including that from Brunswick's competitors, would be available to the supplier. And there was hesitation as well because most of the businesses had never let their data outside of their companies before. "Today, dealers and suppliers understand they need an integration strategy," says Michele Lambert, general manager at Vernon Hills, Ill.-based WDI. "They need to cut the cost of re-entering data, faxing purchase orders and handling customer service issues without the appropriate information."

Once on board with the notion of integration, WDI needed to choose a path. But Brunswick's dealers couldn't afford to make the investment in pricey integration tools, says Lambert. You're talking seven figures for integration packages with difficult-to-prove returns on investment.

Electronic data interchange could have been a solution, but with $20,000 monthly transaction fees for EDI, it would be like getting a sunken boat off a sandbar. Possible, but not likely.

Instead, in 2001 Lambert and the Brunswick team decided to develop their own software and make it available through an open-source license. The XML interfaces of WDI's open-source business engine allow a dealer with a 14,000-part inventory to use low-cost computers and a 14.4Kbit/sec. connection to link and share information with Brunswick.

WDI staff selected Java -- mainly because of an abundance of Java programmers -- and wrote to an open-source standard using Business Process Markup Language as the model for business rules. They adopted XML schemas, Lightweight Directory Access Protocol and Directory Services Markup Language. They created open application programming interfaces so that if the business engine didn't support something, it would be simple to add.

Going open-source was important because the free software pushed adoption among dealers and distributors. (WDI makes its money on services.) Also, independent software vendors in the marine industry chose to embed the engine into existing products. And with an open-source product, WDI got lots more community feedback.

And you thought the folks at Brunswick were just propeller heads.

Microsoft Runs Linux On Azure Cloud

Microsoft is preparing to give its cloud platform users the capability to run Linux on its Windows Azure cloud in 2012, according to a report. The All About Microsoft blog reports that Microsoft is poised to enable customers to make virtual machines (VMs) persistent on Windows Azure and is slated to deliver a Community Technology Preview (CTP) of its persistent VM capability in the spring of 2012.


Microsoft buys GitHub - What It Means for Open Source

(Gary Guthrie @ ConsumerAffairs) Microsoft Corp. announced on Monday that it has cut a deal to purchase software development platform GitHub.

The acquisition gives Microsoft a boost toward making itself more valuable to clients and another inroad for bringing its services to new audiences. GitHub sports an enviable client base of 1.5 million companies using the platform as a software development repository, in addition to 28 million software developers working on 18 million repositories of code.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, Microsoft’s CEO. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

Microsoft’s elation was mirrored on GitHub’s side of the deal, as well. “The future of software development is bright and I’m thrilled to be joining forces with Microsoft to help make it a reality,” said GitHub’s current CEO, Chris Wanstrath. “Their focus on developers lines up perfectly with our own, and their scale, tools and global cloud will play a huge role in making GitHub even more valuable for developers everywhere.”

GitHub has to be happy about the purchase price, too. Microsoft is paying out $7.5 billion in company stock -- nearly three times what GitHub was last valued at in 2015.

Developers are the new kingmakers

Bill Gates once said that “a great lathe operator commands several times the wage of an average lathe operator, but a great writer of software code is worth 10,000 times the price of an average software writer.”

In buying GitHub, Microsoft looks to be setting itself up to have the best software developers on the market.

In Microsoft’s conference call announcing the deal, company CEO Satya Nadella cited a LinkedIn study showing that software engineering roles in industries outside of tech -- such as retail, healthcare, and energy -- are seeing double-digit growth year-over-year, 25 percent faster than the tech industry by itself.

My New Multimedia Computer Uses Ubuntu Linux 12.10 64 bit

Our old multimedia computer sat behind our TV stand. It was a tower computer, a Dell Optiplex 330 with an Intel Pentium D processor, an nvidia 256k graphics card, 4 gigs of RAM, a decent sound card, and a couple of 500-gig drives, running Windows 7. It worked fine, but it made a lot of noise and it required a battery backup (we live in the South, y'all -- thunderstorms), and the battery went dead a week ago.

What we do on the multimedia computer - We really like to watch music videos. I've played in one band or another all my life, and my dear wife enjoys live music but hates all the crowds…. So once every couple of weeks we negotiate on which entertainer to watch, pop some popcorn, queue up a concert, and rock for the evening.

Ninety nine percent of our videos are in mp4 format. We also watch movies in mp4 or divx avi, and listen to music, either mp3s or

Once in a while we look at family pictures (jpg format), and do some research together. We scanned all of our photo albums (over 5000 pictures) into digital format, and gave the originals to our four daughters. Now we show them on our TVs and SmartPhones.

So as you can see, our requirements are not that high. We simply want good sound, good picture resolution, and no interruptions (bandwidth and cache issues).

New Computer – We received this trade-in last week, a Dell Inspiron 1545 laptop with an Intel Pentium Dual-Core processor, 3 gigs of RAM, loaded with Dell’s standard video and sound hardware, running Vista Home Basic (yuck). It looked like it had dropped off a desk, with the LCD display barely attached. For kicks, I brought this wounded puppy home, loaded up a couple videos, and plugged it into our TV and audio system, just to see what came out. We must have been in a rockin’ mood because we loaded up Chili Peppers - Adventures of Rain Dance Maggie, and then found ourselves preparing this laptop to be our multimedia replacement! The performance and sound were as good as or better than the tower. Couldn’t believe it!

We didn’t have to do much. I opened her up, cleaned out the fans, disconnected the lcd display, and put her back together. Whilst working on the laptop, we copied the data from the old multimedia computer to the server hard drives in my office.

The next step was a leap of faith. I just had to get rid of Vista, but I didn’t have any Windows 7 licenses lying around. My wife has used Linux exclusively for several years, and our servers run Linux. But nearly all of our customers, of course, run Windows. She really pushed hard on the Linux issue, and I figured, why not, let’s give it a try. This is a grand experiment anyway.

So I downloaded Ubuntu Linux 12.04 LTS 64 bit and went to work. I wish I could expound about how I had to find special drivers or write a bunch of scripts, but it just didn’t go down like that. I simply clicked Install, used all of the system defaults, and in about 30 minutes, we had our new multimedia computer. That’s it! Voilà! ~DONE~

8/6/2012 Update:
I decided I wanted a wireless keyboard and mouse for this computer. Did some checking to see what worked best, and all I found was whining and complaining about wireless keyboards and mice not working. I know HP does a lot to make their stuff work on Linux, so I purchased an HP wireless keyboard and mouse from WalMart for under $30.00. Plugged it in, turned it on, and it worked out of the box. What the heck are they talking about!

The new computer is quiet, fits anywhere, and has its own backup battery. It really rocks, and you can’t beat the price!!!

11/9/2012 Update:
Decided to upgrade to Ubuntu 12.10 today. The process was quite uneventful. I simply went into the update manager, clicked the upgrade box, and clicked Submit. Answered yes, yes, yes, and in about an hour it was done. I told it to keep the old config so that my 3rd-party drivers would keep working, and they did.

Only two things

1. I had to reinstall openssh-server so that I could use PuTTY to remotely administer the computer:

ctrl–alt–t on your keyboard to open Terminal

sudo apt-get install openssh-server 

2. Deal with the GStreamer bug by moving the plugins into the multiarch directory:

64 bit Ubuntu:

sudo mv /usr/lib/gstreamer-0.10/* /usr/lib/x86_64-linux-gnu/gstreamer-0.10/

32 bit Ubuntu:

sudo mv /usr/lib/gstreamer-0.10/* /usr/lib/i386-linux-gnu/gstreamer-0.10/

That's it! Tonight we are going to treat ourselves to Aerosmith - Live in Japan 2002 concert.


MySQL - What Is It

Pronounced "my ess cue el" (each letter separately) and not "my SEE kwill." MySQL is an open source RDBMS that relies on SQL for processing the data in the database. MySQL provides APIs for the languages C, C++, Eiffel, Java, Perl, PHP and Python. In addition, OLE DB and ODBC providers exist for MySQL data connection in the Microsoft environment. A MySQL .NET Native Provider is also available, which allows native MySQL to .NET access without the need for OLE DB.

MySQL is most commonly used for Web applications and for embedded applications and has become a popular alternative to proprietary database systems because of its speed and reliability. MySQL can run on UNIX, Windows and Mac OS.

MySQL is developed, supported and marketed by MySQL AB. The database is available for free under the terms of the GNU General Public License (GPL) or for a fee to those who do not wish to be bound by the terms of the GPL.

NASA Leaps for Open Source

One Small Step for NASA, One Giant Leap for Open Source

(Katherine Noyes @ LinuxInsider) "When you really need performance/weight as in the space program, who are you going to call: an OS designed by salesmen in secret and in league with hardware suppliers," asked blogger Robert Pogson, "or an OS designed by computer geeks trying hard in the open to get the last bit of performance and reliability out of hardware?"

"Space: the final frontier." These may be the opening words of the Star Trek series so loved by geeks far and wide, but lately, they've been on the tip of more Linux bloggers' tongues than ever.

Why? Because Linux recently scored a major victory some 230 miles up in the sky. Specifically, Windows got the big heave-ho from the International Space Station, which has now boldly gone on to embrace Linux instead.

"We migrated key functions from Windows to Linux because we needed an operating system that was stable and reliable," explained Keith Chuvala of United Space Alliance, a NASA contractor deeply involved in space shuttle and International Space Station (ISS) operations.

Translation: "Ouch!" as VentureBeat's Jolie O'Dell sagely observed.

Of course, Redmond's pain is Linux's gain -- as increasingly seems to be the case these days. The result? An extra-bright starry twinkle in more than a few FOSS fans' eyes.

'It Doesn't Take a Rocket Scientist'

"GNU/Linux is out of this world," enthused blogger Robert Pogson, for example.

"When you really need performance/weight as in the space program, who are you going to call: an OS designed by salesmen in secret and in league with hardware suppliers, or an OS designed by computer geeks trying hard in the open to get the last bit of performance and reliability out of hardware?"

Indeed, "while it doesn't take a rocket scientist to see the advantages of Linux, I guess it doesn't hurt, either," agreed Google+ blogger Kevin O'Brien.

"As those of us who have used Linux know, it is indeed safer and more reliable," he added.

'A Natural Choice'

"Linux is everywhere, not just by its free nature but for stability, resource-efficiency and openness," Google+ blogger Rodolfo Saenz told Linux Girl. "NASA needed to rely on a powerful open OS to be modified at will, and they've got brilliant developers and resourceful engineers dedicated to research. GNU/Linux was a natural choice for them."

In fact, "I think it would be no surprise if they come up with a NASA closed-Linux distro in the coming years (or even months!)," Saenz suggested.

"Once more, GNU/Linux is an example that great minds (freely and passionately) work better than big corporations," he concluded.

'Easier to Centrally Manage'

It is "very cool that organizations like NASA are starting to take more and more notice of Linux," Linux Rants blogger Mike Stone opined.

"I predict that we're going to see more critical systems moving to Linux because of its open nature and its stability," Stone suggested. "Sadly, Windows XP didn't really set the bar all that high."

The move "makes sense," consultant and Slashdot blogger Gerhard Mack agreed. "Not only should it be more stable, it should be easier to centrally manage. And NASA uses mostly custom apps, so compatibility with Windows is not as important for them."

'Increasingly the Mainstay'

Similarly, "this is good news, though it suggests that the number of scientific tools available on such laptops makes the choice relatively easy," offered Chris Travers, a blogger who works on the LedgerSMB project.

"Presumably they didn't choose something like FreeBSD because of more limited hardware support there (this being said, my sense is that what hardware FreeBSD and the like do support is more stable than under Linux)," Travers added.

"Linux as a kernel supports a wide range of hardware, and *Nix environments (including both *BSD and Linux) are becoming increasingly the mainstay of scientific and engineering environments," he said. "I suspect this is one reason why Autodesk programs run so well under WINE."

'Everything Just Works'

Google+ blogger Brett Legree had a similar take.

"I think that this is awesome news, for the people who work in the ISS and for the Debian project," Legree told Linux Girl. "It makes good sense. Debian runs extremely well on hardware that was spec'd for Windows XP (i.e. in terms of the hardware 'power'), Debian is extremely stable, Debian handles upgrades and updates better than just about anything I have used -- did I mention that it is free?"

Although "sadly we won't be following the same path where I work when Windows XP reaches EOL (we will roll out Windows 7 and Office 2010 ... ), my own personal experiences at home show the sense in this," he added.

"My HP 2133 netbook runs better with Debian and LXDE than it ever did with Windows XP," Legree concluded. "Everything just works, right out of the box."

'This Is Where Linux Fits Best'

Last but not least, Robin Lim, a lawyer and blogger on Mobile Raptor, had a bittersweet interpretation.

"To me, this is really nothing new and exciting," Lim told Linux Girl. "NASA has been using Linux more and more to power its space program. Most recently, they use three Android-powered Nexus One smartphones as part of a low budget experimental satellite program."

In fact, "this is really where Linux fits best," he added. "Its open source nature allows the end user to customize it to fit their needs."

Nevertheless, "it is another success in an area where GNU/Linux has had a long string of successes," Lim suggested.

'All About Apps and Choices'

At the same time, however, "this also highlights the one area where GNU/Linux is weakest: as a desktop computing platform for the average user," he said.

"With a consumer platform, it's all about apps and choices," Lim explained. "What GNU/Linux needs at this point is not a NASA endorsement but a company who can build a stronger software and hardware ecosystem around it -- in other words, an Apple, Google or Microsoft or maybe even Samsung.

"Maybe the community itself could give it a big boost," he suggested. "Retire 95 percent of the distributions and task the resources there to developing consumer Linux-compatible apps."

Katherine Noyes has been writing from behind Linux Girl's cape since late 2007, but she knows how to be a reporter in real life, too. She's particularly interested in space, science, open source software and geeky things in general. You can also find her on Twitter and Google+.

New Drupal WebSites

New Headlines & Stories From Around The World

In the past few months we have brought you stories about American companies and the US Government implementing Open Source solutions. This article highlights noteworthy international Open Source projects.

IDABC: UK: Open Source reduces cost of London's public transport card system

"Switching to Open Source software reduced by 80 percent the costs for software licensing and hosting of London's Oyster card system which handles payments on the city's buses and underground."
"The IT news site Zdnet last week reported on the move to Open Source by Transport for London (TFL), following a presentation by Michael Robinson, a senior consultant with Deloitte, who discussed the migration at the Open Source Forum event in London."

BG: Government's increasing use of Open Source inevitable

"The Bulgarian government will turn more and more to Open Source software, predicts Krasimir Panayotov, coordinator of the GNU/Linux User Group in the city of Rousse, the country's fifth-largest city."

"The Open Source advocacy group are confident that Bulgarian public administrations migrating to this type of software find it reduces the cost of IT and at the same time improves the security of their systems. To help get this message across, the user group is organizing software demonstrations next month tailored especially for representatives of public administrations."
Complete Story

IDABC: FR: Education Ministry encourages Open Source use

"The department at the French Ministry of Education that is handling purchasing of software and software licenses is increasing its Open Source offerings to some 1.5 million teachers and education workers in 250 institutes in France."

"This is how Dominique Verez, spokesperson for the 'Software Group for Higher Education and Research', explains a recent agreement it signed with Mandriva, a French company developing a GNU/Linux distribution by the same name. The two agreed last month on a 60 percent discount for the purchase of the commercial version of the free software for all teachers and staff at France's schools and universities. "Our goals are to promote alternative solutions, to offer more choice and to make our users less dependent on software vendors."
Complete Story

IDABC: UK Companies to support schools using Open Source

"At least three British companies specializing in Open Source software have submitted tenders for a 270,000 GBP (340,000 euro) project to support a sustainable and significant community of schools using and developing Open Source."

"The project, titled 'Schools Open Source Project' was launched earlier this month by the British educational IT agency Becta. "We wish to ensure that schools are aware of and can access the wide variety of Open Source software in the marketplace." To achieve this, Becta says, it must organize support in adoption, deployment, use and ongoing development. The IT organization wants this project to result in a sustainable and significant community of schools that use and develop open source products."
Complete Story

CIO: Open Source is Entering the Enterprise Mainstream, Survey Shows

"Open-source solutions used to be adopted quietly by company boffins who snuck in an Apache Web server or an open-source development tool suite under the philosophy "It's easier to get forgiveness than permission" (not to mention "It's easier to do it with open-source tools than to get an IT budget")."
Complete Story

IDABC: Administrative Court publishes automatic document conversion tool

"The Dutch Council of State is willing to Open Source its application that can centrally convert documents between open formats and proprietary formats, said Marcel Pennock, the tool's developer, Wednesday at a conference on Open Document Format (ODF) in Utrecht."
"The administrative courthouse in The Hague is currently testing his conversion software. "As far as I am concerned, the software will be released as Open Source."
Complete Story

InternetNews: Open Source Making the Grade in Higher Education

"Open source and higher education have a long and storied history. After all, BSD Unix originally came from the University of California at Berkeley, and Linux itself was created while Linus Torvalds was a student at the University of Helsinki."
"Yet new research from Gartner indicates that open source is taking hold in universities in more areas than ever before, a fact supported by a string of wins from commercial open source vendor GroundWork."
Complete Story

IDABC: Proof of progress on Open Source Geographic Information Systems

"A group of European scientists will be showcasing their evaluation of Open Source Geographic Information Systems (GIS) at a conference organized in Warsaw, Poland, in early June."
"The conference is meant for, among others, public administrators interested in Open Source GIS applications and its use for security and environment monitoring."
Complete Story

CNet: The most important open-source company: Google

"Chris Dibona, head of Google's open source program office, sat down to talk with CNET's Stephen Shankland. In the course of that interview, Chris provided great insight into how Google views open source and contributes back to the various communities from which it derives benefit."
Complete Story

IDABC: Geneva schools completely switch to Open Source

"About 70,000 students and their 7,000 teachers in the Geneva school district will gradually be moving to Open Source."
"The decision to move to Open Source was taken by the Geneva Public School District (Département de l'Instruction Publique Genevois, DIP) in March 2006, says Manuel Grandjean, project leader for the Geneva district's Open Source migration. "The district wants Open Source software to become the default."
Complete Story


Norway goes open source by Iain Thomson

Government initiative harpoons Microsoft

The Norwegian Minister of Modernization, Morten Andreas Meyer, has promised that his government will stop using proprietary software and transfer to open source.

Speaking at the eNorge 2009 conference Meyer outlined an initiative to digitize government relations. This includes a commitment that all public institutions will plan the introduction of open source systems by next year.

He also said that every citizen would be given their own home page on the government's servers to make dealing with the state easier.

"Proprietary formats will no longer be acceptable in communication between citizens and government," explained Meyer.

While he did not mention Microsoft by name, Meyer did make references to "the spreadsheet almost everyone uses" and commented that this would be the last time he made a presentation using the software.

The Norwegian Competition Authority is reportedly considering investigating Microsoft after a recent deal with schools left other competitors' software blocked.

"If one has a monopoly or is a very big player one is interested in maintaining the hegemony. In addition, the public sector has great power in the software market because it is a very big customer and can make demands," said Christine Hafskjold, a spokeswoman for the Norwegian Board of Technology.

The announcement will be seen as a serious blow to the credibility of Microsoft's initiative to sell e-government software. The UK government is actively investigating the greater use of open source in its systems at a county and city council level.

Open Office Find and Replace

Find and Replace

Find and replace are combined in OOo, unlike in MSO; there is no separate menu entry or keyboard shortcut for replace. Use Edit > Find & Replace, press Control+F, or click the Find & Replace icon on the Standard toolbar.


There is no “word forms” search.

Having performed a search, and having closed the dialog box, the keyboard shortcut to repeat the search is Control+Shift+F.

Searches are paragraph-based. There is no way to search for text on either side of a paragraph marker; for example, OOo cannot search for two blank paragraphs. To get around this problem, and some other issues, a macro has been developed. See IannzFindReplace (last updated 20 March 2006), available from
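For comparison, a general-purpose scripting language has no such restriction, since a paragraph mark in plain text is just a newline character. A minimal Python sketch (the sample text is hypothetical) that finds and collapses two consecutive blank paragraphs:

```python
import re

# In plain text, a paragraph mark is a newline; two blank paragraphs
# show up as a run of three or more consecutive newlines.
text = "First paragraph.\n\n\nSecond paragraph."

# OOo's Find & Replace cannot match across paragraph marks;
# Python's re module can, because "\n" is just another character.
match = re.search(r"\n\n\n", text)

# Collapse the run of blank paragraphs down to a single break.
cleaned = re.sub(r"\n{3,}", "\n\n", text)
print(match is not None, repr(cleaned))
```

This is only an illustration of the limitation, not a replacement for the IannzFindReplace macro, which works inside Writer itself.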


Manual page breaks are handled internally by changing the paragraph format of the first paragraph on the new page. When searching for formats this option is grayed out because there is no way to search for “manual” page breaks.

Tips for find and replace

It is common to do several find and replace operations on the same selection; however, OOo does not “remember” the original selection. Before doing the find and replace, bookmark the selection in Writer, or define a range name for it in Calc, so that the same range can be quickly reselected using the Navigator.

Doing a “find all” selects all the pieces of text that match the criteria. It is possible to perform, on all the selections, most operations that are possible on a single selection.


The Attributes button is only available in Writer’s Find & Replace dialog. This button displays a dialog with a series of checkboxes to find where a particular attribute has been changed from the default for the underlying style. For example, selecting the attribute “Font weight” will find text that has been made bold where the underlying font is not bold (and vice versa).


The attribute settings remain in effect between uses of the Find & Replace dialog. This can be frustrating, so always turn off all of the attribute settings after each use.


This is the same search concept as in MSO. Note that when formats are used, there is an option to include searching within styles. For example, searching for bold text would not find bold text where the style is bold unless this option is checked.

Regular expressions

“Regular expressions” are significantly different in OOo from MSO’s “Use wildcards”. See Help > OpenOffice.org Help, select the Index tab, type in “regular expressions”, then move to “Searching” and press Display. Some common examples are in Table 1. To use regular expressions, click the More Options button in the Find & Replace dialog and make sure the Regular expressions checkbox is checked. On reopening the Find & Replace dialog, the Regular expressions checkbox is always unchecked.

Table 1. Sample regular expressions

Search: \t+   Replace: \t
Replace multiple tabs with just one tab.

Search: [:space:]+   Replace: a single space
Replace multiple spaces with just one space. “[:space:]” finds both non-breaking spaces and normal spaces but not tabs. Type a normal space in the Replace field.

Search: ^[ \t]+   Replace: nothing
Remove leading white space (space or tabs in any combination) at the start of a paragraph.

Search: [ \t]+$   Replace: nothing
Remove trailing white space (space or tabs in any combination at end of paragraph).

Search: ^a.*   Replace: nothing
Find paragraphs beginning with the character “a” (the rest of the paragraph can vary) and replace the whole paragraph with a blank line.

Search: $   Replace: a single space
Remove a paragraph mark from the end of lines, for example when having pasted text from an e-mail message.

Search: $   Replace: ,
Replace paragraph marks with a comma so that there is one long line rather than many lines.

Search: ,   Replace: \n
Replace commas with a paragraph mark.

Search: \n   Replace: \n
Replace line breaks (Shift+Enter) with paragraph marks. (Note that \n is used for both the Search and Replace fields. In search it is interpreted as a newline and in replace as a paragraph mark. There is no way to insert a line break from the replace field.)

Search: \<the\>
Find the word “the” only (do not find “then” or “bathe”).

Search: ing\>
Find “ing” at the end of a word, for example reading or writing but not singer.

Search: [^ ]*ing\>
Find whole words that end with “ing”. Note that there is a space between the caret and the close-square-bracket character.

Search: ^[0-9.]*
Select all numbers at the start of a line where the numbers could include a period, for example 1.1. or 1.13.2.




The asterisk “*” means any number of the preceding character. Where in MSO you might use just “*”, the equivalent in OOo is “.*”, because “.” stands for any single character (like MSO’s “?”).
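The same ideas carry over to other regex dialects. A short Python sketch of a few of the table's patterns (the syntax differs slightly: Python uses \b for the word boundaries that OOo writes as \< and \>, and \s or [ \t] where OOo offers [:space:]; the sample strings are invented):

```python
import re

# Replace multiple tabs with just one tab.
one_tab = re.sub(r"\t+", "\t", "a\t\t\tb")

# Remove leading white space (spaces or tabs) at the start of each line.
stripped = re.sub(r"^[ \t]+", "", "   indented", flags=re.MULTILINE)

# Find the word "the" only (not "then" or "bathe").
whole_word = re.search(r"\bthe\b", "bathe then the end")

# Find whole words ending in "ing" (reading, writing; not singer).
ing_words = re.findall(r"\b\w+ing\b", "reading a song by a singer while writing")

print(one_tab, stripped, whole_word is not None, ing_words)
```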

A macro that makes it easier to use regular expressions in Writer, and that allows searching for page breaks and for things such as multiple blank paragraphs, is provided in a document called IannzFindReplace.sxw, available from

Similarity search

The Similarity search option broadens the search so that what is found does not have to be exactly the same as what was specified in the Search for field. To specify how different it can be, select the Similarity search checkbox in the Find & Replace dialog.

Search for styles

Writer and Calc have an option in the Find & Replace dialog: Search for Styles (which changes to Within Styles if Format or Attribute search is used). Check this first if you are searching for a particular style. The Search for Styles field changes to a listing of the paragraph styles in use.

Open Office Mail Merge

Mail Merge in OpenOffice Writer: Creating Mail Merge Documents From Text/CSV or Spreadsheets

I've got a lot of info out there, including lots of coverage in my book, about mail merges. However, I don't have a nice simple straightforward blog on it with everything in the same place all spelled out. Didn't, that is. This is all you need to do to make a nice simple document based on data in text files or spreadsheets.

What You Have to Do

1. Get your data. You've probably already got it. This blog is for people with data in text files or in spreadsheets.

2. Turn it into a data source.

3. Create your mail merge document and suck the data in through the data source.

4. Print, specifying how many of the data records you want to print for, and whether to print to a file or printer.
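Conceptually, the four steps above boil down to row-by-row template substitution: each record in the data source fills in the fields of one copy of the document. A minimal Python sketch of the same idea (the field names and sample rows are hypothetical):

```python
import csv
import io
from string import Template

# Step 1: the data, here a small comma-separated source.
data = io.StringIO("name,city\nAda,London\nLinus,Helsinki\n")

# The mail merge document, with $name and $city standing in
# for the dragged-in database fields.
letter = Template("Dear $name,\nGreetings from $city!\n")

# Steps 2-4 collapsed: read each record and produce one letter per row.
letters = [letter.substitute(row) for row in csv.DictReader(data)]
for text in letters:
    print(text)
```

This is only a sketch of the concept; in Writer itself, the data source and the drag-and-drop fields described below do this work for you.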

1. Get Your Data

You probably already have it. It's in a .txt file or .csv that's comma or tab separated, perhaps. Or it's just a spreadsheet.

2. Make the Data Source: Text File Instructions

If your data is in text files, follow these steps.

1. Choose File > New > Database.

2. Make the selection shown, with Text as the format.


3. Click Next.

4. Specify the DIRECTORY where the text files are. Each text file in that directory will be a table in your database. Then select the item separating fields, i.e. a tab or comma or something else.


5. When all the settings look correct, click Next.

6. Unmark the option to open the database for editing. You can open it; you just don't have to.


7. Click Next.

8. Save the data source (aka database) under a name that will help you remember what it is.


You're done.

2. Make the Data Source: Spreadsheet Instructions

If your data is in a spreadsheet, follow these steps.

1. Choose File > New > Database.

2. Make the selection shown, with Spreadsheet as the format.


3. Click Next.

4. Specify the spreadsheet file. Each SHEET in that spreadsheet will be a table in your database.


5. Click Next.

6. Unmark the option to open the database for editing. You can open it; you just don't have to.


7. Click Next.

8. Save the data source (aka database) under a name that will help you remember what it is.

You're done.

3. Create Your Mail Merge Document and Suck the Data In From the Datasource

You can also use the simple or complex mail merge.



But this is a nice way to do it too.

1. Create a new Writer document or open a document containing text that you want in the mail merge document.

2. Choose View > Data Sources. Everything you've created will be displayed. Click the + sign by the data source you want to use, then click the + by Tables until you see the data you want to use.


3. Type any content you want and do any formatting. You can do this later too.


4. Click on the NAME OF THE FIELD, not the piece of data, that you want in the mail merge.


5. Drag it into the document and release. The field name will appear.


6. Add any other content and fields you want.


Save the document. You're ready to print.

4. Print the Mail Merge Document.

1. Choose File > Print.

2. When a message asks whether you want to print a form letter, click Yes.

3. In the print window, specify the range of records, if you don't want them all, and specify whether to print to a printer or to files.

4. Click OK.

5. In the print window, specify the printer and click Print.





Open Source Digital Signage

Picture of Digital signage on the cheap

Digital store front window signage can be expensive, and you may not have the technical know-how to manage or update the information shown on the display. Here is how you can create and manage your own signage for less than you might expect, using freeware and a little imagination.


Step 1: Got an old obsolete computer?

Picture of Got an old obsolete computer?

I picked one up from the curb similar to the one above. It had Windows 98, a 933 MHz Pentium III with a 20 GB hard drive and 384 MB of RAM. Yikes, no wonder it was put to the curb. However, many people do not realize that a computer like this is perfect as a one-trick pony, doing one task continuously throughout the day without an ounce of struggle. This fellow was resurrected as the control unit for digital signage.


Step 2: Decorate the covers before starting

Picture of Decorate the covers before starting

I went to the recycling center and found some antiquing spray paint. It made for an interesting texture to the cabinet.


Step 3: I used Ubuntu for mine.

Picture of I used Ubuntu for mine.

I'm sure you could use other versions of Linux. Given that this PC was ancient, I used an earlier version of Ubuntu; version 10.04 worked well on this system. You can download previous versions at no charge since it is all free software. The 10.04 desktop has OpenOffice.org already installed, which will run the Impress presentation with no trouble. If you have a newer computer, use the current version of Ubuntu if the machine meets the specs; these releases have LibreOffice pre-installed. Once Linux is loaded, you will have to set the screen and computer to never sleep. This is generally done in the screen lock and power sections of the settings menu.


Step 4: Create your impress presentation

Picture of Create your impress presentation

Gather your pictures, ideas, and documents to create a presentation. This is easier than one might think, though it will take a little trial and error if you are not familiar with the program. The learning curve isn't too steep for a digital-signage-type presentation. Since the computer doing the presenting was quite old, I created the presentation on a newer computer, which saved time and frustration. Remember, the older box need only play it, not create it.


Step 5: Loop your presentation

Picture of Loop your presentation

Do this by setting the type to Auto and the time to 00:00:00, under Slide Show > Slide Show Settings. This will loop your presentation indefinitely.
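If you would rather not start the loop by hand after every power cut, the never-sleep settings and the presentation launch can be combined into one startup script added to the desktop's autostart. A hypothetical example (the file path is an assumption, and the option spelling varies between OpenOffice.org and LibreOffice versions, e.g. -show vs. --show):

```shell
#!/bin/sh
# Hypothetical autostart script for the signage box.

# Keep the display awake: disable the X screensaver and DPMS power saving.
xset s off
xset s noblank
xset -dpms

# Launch the presentation straight into full-screen slide-show mode.
# --norestore skips the crash-recovery prompt after an unclean shutdown.
soffice --norestore --show /home/signage/presentation.odp
```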


Step 6: Start your presentation

Picture of Start your presentation

You might need to buy a large flatscreen for your display, or use what you currently have. A new 32 inch TV generally has a computer video port and costs $200 or less. The only real money in this project is the display; everything else is re-purposing an older computer with a free operating system and its associated freeware. Once the presentation is running, you may remove the USB mouse and keyboard for a clean look. (If you are using a PS/2 mouse and keyboard, do not unplug them while the machine is running, as that might damage the motherboard.) The system will run uninterrupted 24/7 until you replug the mouse and keyboard to stop it. Think of how it would look in your store front or office reception area on a 60 inch monitor. Why pay several thousand for a commercial system when a 60 inch plain flatscreen can be had for less than $500 now? Again, the only real expense is the monitor. In our case, this presentation runs in a corner of our sales floor, using an older monitor to present the latest products for sale.


