If you - like me - deal with more than a single keyboard layout (e.g. Russian and English), you might have experienced the pain of using the on-screen keyboard even with the dock attached. Look no further - here's an easy two-step solution for your troubles.
I think it really doesn't matter whether you're using the Android keyboard or Hacker's Keyboard (as I do), since this functionality seems to come from the dock keyboard driver. Now...
Open up Settings -> Language & Input
Under Physical Keyboard select "asusdec" and configure it to your liking.
To switch between layouts just hit Ctrl-Space and you're all set. Enjoy!
In short - just get rid of the stock ASUS software as fast as possible. I was foolish enough to put up with it for far too long. Despite frequent unwarranted reboots and sluggish performance, I didn't try to get rid of it for about a year. It finally pushed me over the edge with 12 reboots over the last two days.
Here's literally all I did:
Get Titanium Backup from the Play Store. I can't recommend the Pro version highly enough, as it makes backups - and especially restores - a breeze.
Back up your applications and user data. While not required, I used an external SD card for that, not the embedded device storage.
Follow the instructions from the CyanogenMod Wiki. I happened to have the TWRP recovery image installed already, and it works just fine. However, make sure your bootloader version is 10.6.1.14.4 or higher. You might want to check my earlier post on rooting the ASUS 700T.
I chose to install release version 10.2, along with the recommended Google Apps zip file from their download area.
Once in the recovery, I wiped the system, caches, and data, and did a factory reset on top of it. Then I installed the zip files downloaded earlier and rebooted the device for the first time.
Now, the first boot took a couple of minutes - I guess some initialization happens the first time - so be patient. A very easy, self-explanatory setup procedure starts as soon as the boot process is over. Once the system is configured, you should have access to the Internet, your email, calendar, and the Market.
Now it is time to restore your stuff to its original glory :) I recommend installing Titanium Backup from the Market first. Then run it and change the preferences to point to your earlier backup location. From there, I recommend restoring Titanium Backup PRO first - that will make the rest of the restoration so much easier.
During the restore you can safely skip any of the annoying ASUS apps and services you don't need. I actually recommend not restoring the Device Unlock app - for whatever reason, the restore process hung on it in my case.
Once everything is restored, do one more reboot just in case, and you have your system back - flying high and fast.
One thing you might want to pay some attention to is the new Privacy Guard, which allows you to restrict what apps can learn and share about you. In other words, you now have fine-grained control over your personal data and can prevent apps from imposing totally insane and unrealistic permission requirements.
What I noticed immediately is that I no longer have the blank-message issue in K-9 Mail that had been haunting me for something like six months. It is gone for good! The keyboard works perfectly - I am typing this post on my Transformer. So by all means, folks - get yourself CyanogenMod and experience what feels like a brand-new, fast tablet!
With a couple of days left before the year's end, I wanted to look back and reflect on what has happened so far in the IT bubble 2.0 commonly referred to as "BigData". Here are some of my musings.
Let's start with this simple statement: BigData is a misnomer. Most likely it was put forward by some PR or MBA schmuck with no imagination whatsoever, who thought that a terabyte consists of 1000 megabytes ;) The word has been picked up by pointy-haired bosses all around the world, who need buzzwords to justify their existence to the people around them. But I digress...
So what has happened over the last 12 months in this segment of software development? Well, surprisingly, you can count the really interesting events on one hand. To name a few:
Fault tolerance in distributed systems reached a new level with NonStop Hadoop, introduced by WANdisco earlier this year. The idea of avoiding complex screw-ups by agreeing on operations up-front leaves the likes of Linux-HA, Hadoop QJM, and NFS-based solutions rolling in the dust in the rear-view mirror.
Hadoop HDFS is clearly here to stay: you can see customers shifting from platforms like Teradata toward cheaper and widely supported HDFS network storage, with EMC (VMware, Greenplum, etc.) offering it as the storage layer under Greenplum's proprietary PostgreSQL cluster, among many others.
While enjoying a huge head start, HDFS has a strong though not very obvious competitor - Ceph. As some know, there's a patch that provides a Ceph drop-in replacement for HDFS. But where it gets really interesting is how systems like Spark (see next paragraph) can work directly on top of the Ceph file system with relatively small changes in the code. Just picture it:
distributed Linux file system <-> high-speed data analytics
Drawing conclusions is left as an exercise to the readers.
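For the curious, the drop-in swap boils down to pointing Hadoop's file-system abstraction at Ceph instead of HDFS. The sketch below shows roughly what that looks like in core-site.xml; the property names follow the Ceph Hadoop bindings as I understand them, and the monitor host is a placeholder, so treat this as an illustration rather than a verified config:

```xml
<!-- core-site.xml: wiring Hadoop's FileSystem layer to Ceph
     instead of HDFS. "monitor-host" is a placeholder for your
     Ceph monitor address. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>ceph://monitor-host:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
</configuration>
```

Once the FileSystem implementation is swapped, anything that talks to Hadoop through that abstraction - Spark included - reads and writes Ceph without knowing the difference.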
With the recent advent and fast rise of a new in-memory analytics platform - Apache Spark (incubating) - the traditional, two-bit MapReduce paradigm is losing its grip very quickly. The gap is getting wider as a new generation of task and resource schedulers gains momentum by the day: Mesos, the Spark standalone scheduler, Sparrow. The latter is especially interesting with its 5 ms scheduling guarantees. That leaves the latest reincarnation of MR in a predicament.
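To see why in-memory matters, consider an iterative job. A chain of MapReduce jobs re-reads and re-writes the dataset from the distributed store on every iteration, while Spark reads it once and keeps it cached. Here's a plain-Python illustration of that difference (this is not the Spark or Hadoop API, just a toy model counting simulated storage operations):

```python
# Toy model: count how many times each execution style touches "disk".
io_ops = {"reads": 0, "writes": 0}
storage = {"input": list(range(10))}

def dfs_read(key):
    io_ops["reads"] += 1
    return list(storage[key])

def dfs_write(key, data):
    io_ops["writes"] += 1
    storage[key] = list(data)

def mapreduce_style(iterations):
    # Each iteration is a separate job: read the previous output from
    # storage, transform it, and write the result back out.
    key = "input"
    for i in range(iterations):
        data = [x + 1 for x in dfs_read(key)]
        key = f"iter-{i}"
        dfs_write(key, data)
    return dfs_read(key)

def spark_style(iterations):
    # Read once, keep the dataset cached in memory across iterations.
    data = dfs_read("input")
    for _ in range(iterations):
        data = [x + 1 for x in data]
    return data
```

Both styles compute the same result, but ten MapReduce-style iterations cost ten reads and ten writes of the working set, while the cached version reads it exactly once - and that gap only widens with real data sizes.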
Shark - the SQL layer on top of Spark - is winning the day in the BI world, as you can see from its growing popularity. It seems to have nowhere to go but up, as the likes of Impala, Tez, and ASF Drill are still very far from being accepted in data centers.
With all of the above, it is very exciting to see my good friends from AMPLab spinning up a new company that will focus on the core platform of Spark, Shark, and all things related. Best wishes to Databricks in the coming year!
Speaking of BI, it is interesting to see that BigData BI and BA companies are still trying to prove their business model and make it self-sustainable. Cases in point: Datameer with its recent $19M D round, Platfora's $20M B round last year, etc. I reckon we'll see more fund-raisers in the 10^7 or perhaps 10^8 dollar range in the coming year, among both the application companies and the platform ones. New letters will be added to the mix too - F rounds, G rounds, etc. - as cheap currency keeps finding its way from the Fed through the financial sector to the pockets of VCs, and further down to high-risk sectors like IT and software development. This will lead to an overheated job market in Silicon Valley and elsewhere, followed by a blow-up similar to, but bigger than, 2000-2001. It will be particularly fascinating to watch big companies scavenging the pieces after the explosion. So duck to avoid the shrapnel.
Stack integration and validation have become a pain point for many. I see the effects of it in the sharp uptick of interest in, and growth of, the Apache Bigtop community. Which is no surprise, considering that all commercial Hadoop distributions today are either based on Bigtop or use it directly as the stack-producing framework.
While I don't have a crystal ball (it would be handy sometimes), I think a couple of very strong trends are emerging in this segment of the technology:
HDFS availability - and software-stack availability in general - is a big deal: as more and more companies add an HDFS layer to their storage stack, stricter SLAs will emerge. And I am not talking about five nines - the equivalent of roughly five minutes of downtime per year - but about six and seven nines. I think ZooKeeper-based solutions are in for a rough ride.
Machine learning has huge momentum - the Spark Summit was one big piece of evidence of that. With it comes the need for incredibly fast scheduling and hardware utilization. Hence, things like Mesos, Spark standalone, and Sparrow are going to keep gaining momentum.
The seasonal, lemming-like migration to the cloud will continue, I am afraid. Security will become a red-hot issue and an investment opportunity. However, anyone who values their data is unlikely to move to the public cloud; hence, private platforms like OpenStack might be on the rise (if the providers can deal with their "design by committee" issues, of course).
Storage and analytics stack deployment and orchestration will be more pressing than ever (and no, I am talking about real orchestration, not cluster-management software). That's why I am looking very closely at what companies like Reactor8 are doing in this space.
So, the last year brought a lot of excitement and interesting challenges. 2014, I am sure, will be even more fun. However, "living in interesting times" might be a curse as well as a blessing. Stay safe, my friends!
Do you know what SiliconAngle and the Wikibon project are? If not, check them out soon. These guys have a vision for next-generation media coverage - I would call it 'the #1 no-BS Silicon Valley media channel'. They run professional video journalism with a very smart technical setup. And they aren't your typical TV loudmouths: they use and grok the technologies they cover. For instance, they run Apache Solr in-house for real-time trend processing and search. Amazing. And they don't have teleprompters. Nor screenplay writers. How cool is that?
At any rate, I was invited onto their show, theCube, last week on the last day of Hadoop Summit, where I talked about High Availability issues in Hadoop. Yup, High Availability has issues - you heard me right. The issue is less-than-100% uptime. Basically, even if someone claims to provide five nines (that is, 99.999% uptime), you are still looking at roughly five minutes a year of downtime for your mission-critical infrastructure.
If you need 100% uptime for your Hadoop cluster, you should be looking at Continuous Availability. Curiously enough, the solution is found in the past (isn't that always the case?) in the so-called Paxos algorithm, described by Leslie Lamport in a paper written around 1989, though not published until 1998. However, the original Paxos algorithm has some performance issues, was never fully embraced by the industry, and is rarely used outside of a few tech-savvy companies. One of them - WANdisco - applied it first to Subversion replication and now to the Hadoop HDFS SPOF problem, and made it generally available as a commercial product.
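To give a flavor of how Paxos removes the single point of failure: nodes agree on an operation before applying it, and once a majority has accepted a value, no later proposal can change it. Here is a toy single-decree round in Python - synchronous, in-process, no failures or networking - so it's a sketch of the core protocol, not anything like a production implementation:

```python
# Toy single-decree Paxos: one value is chosen by a majority of
# acceptors, and later proposals can only re-confirm it.

class Acceptor:
    def __init__(self):
        self.promised = -1      # highest ballot we promised to honor
        self.accepted = None    # (ballot, value) we last voted for

    def prepare(self, ballot):
        # Phase 1b: promise to ignore lower ballots; report any prior vote.
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        # Phase 2b: vote for the value unless we promised a higher ballot.
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Run one Paxos round; return the chosen value, or None on failure."""
    # Phase 1a: collect promises from a majority.
    responses = [a.prepare(ballot) for a in acceptors]
    granted = [prior for ok, prior in responses if ok]
    if len(granted) <= len(acceptors) // 2:
        return None
    # Safety rule: if any acceptor already voted, adopt the value
    # from the highest-ballot prior vote instead of our own.
    prior_votes = [p for p in granted if p is not None]
    if prior_votes:
        value = max(prior_votes)[1]
    # Phase 2a: ask the acceptors to vote; need a majority of acks.
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks > len(acceptors) // 2 else None
```

Running a round with three acceptors chooses the first proposed value; a later proposer with a higher ballot discovers the earlier choice and re-proposes it, which is exactly the property that lets several NameNode-like replicas stay consistent without a single master.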
And just think what could be done if the same technology were applied to mission-critical analytical platforms such as AMPLab's Spark. Anyway, watch the recording of my interview on theCube and learn more.